00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 271 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.105 The recommended git tool is: git 00:00:00.105 using credential 00000000-0000-0000-0000-000000000002 00:00:00.108 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.131 Fetching changes from the remote Git repository 00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.165 Using shallow fetch with depth 1 00:00:00.165 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.165 > git --version # timeout=10 00:00:00.194 > git --version # 'git version 2.39.2' 00:00:00.194 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.403 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.417 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.429 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:04.429 > git config core.sparsecheckout # timeout=10 00:00:04.439 > git read-tree -mu HEAD # timeout=10 00:00:04.454 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:04.470 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:04.470 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:04.557 [Pipeline] Start of Pipeline 00:00:04.588 [Pipeline] library 00:00:04.590 Loading library shm_lib@master 00:00:04.590 Library shm_lib@master is cached. Copying from home. 00:00:04.607 [Pipeline] node 00:00:04.618 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:04.620 [Pipeline] { 00:00:04.635 [Pipeline] catchError 00:00:04.637 [Pipeline] { 00:00:04.652 [Pipeline] wrap 00:00:04.659 [Pipeline] { 00:00:04.666 [Pipeline] stage 00:00:04.668 [Pipeline] { (Prologue) 00:00:04.689 [Pipeline] echo 00:00:04.691 Node: VM-host-SM16 00:00:04.698 [Pipeline] cleanWs 00:00:04.706 [WS-CLEANUP] Deleting project workspace... 00:00:04.706 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.711 [WS-CLEANUP] done 00:00:04.900 [Pipeline] setCustomBuildProperty 00:00:04.996 [Pipeline] httpRequest 00:00:05.016 [Pipeline] echo 00:00:05.017 Sorcerer 10.211.164.101 is alive 00:00:05.024 [Pipeline] httpRequest 00:00:05.028 HttpMethod: GET 00:00:05.029 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.029 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.030 Response Code: HTTP/1.1 200 OK 00:00:05.030 Success: Status code 200 is in the accepted range: 200,404 00:00:05.030 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.089 [Pipeline] sh 00:00:06.362 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.374 [Pipeline] httpRequest 00:00:06.399 [Pipeline] echo 00:00:06.401 Sorcerer 10.211.164.101 is alive 00:00:06.407 [Pipeline] httpRequest 00:00:06.410 HttpMethod: GET 00:00:06.411 URL: http://10.211.164.101/packages/spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:00:06.411 Sending request to url: http://10.211.164.101/packages/spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:00:06.412 Response Code: HTTP/1.1 200 OK 00:00:06.412 Success: Status code 200 is in the accepted range: 200,404 00:00:06.412 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:00:28.157 [Pipeline] sh 00:00:28.440 + tar --no-same-owner -xf spdk_89fd17309ebf03a59fb073615058a70b852baa8d.tar.gz 00:00:31.731 [Pipeline] sh 00:00:32.018 + git -C spdk log --oneline -n5 00:00:32.018 89fd17309 bdev/raid: add qos for raid process 00:00:32.018 9645ea138 util: move module/sock/sock_kernel.h contents to net.c 00:00:32.018 e8671c893 util: add spdk_net_get_interface_name 00:00:32.018 7798a2572 scripts/nvmf_perf: set all NIC RX queues at once 00:00:32.018 986fe0958 scripts/nvmf_perf: indent multi-line strings 00:00:32.034 [Pipeline] sh 00:00:32.312 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/75/24275/1 00:00:33.246 From https://review.spdk.io/gerrit/spdk/dpdk 00:00:33.246 * branch refs/changes/75/24275/1 -> FETCH_HEAD 00:00:33.258 [Pipeline] sh 00:00:33.536 + git -C spdk/dpdk checkout FETCH_HEAD 00:00:34.104 Previous HEAD position was 08f3a46de7 pmdinfogen: avoid empty string in ELFSymbol() 00:00:34.104 HEAD is now at 6766bde469 eal/alarm_cancel: Fix thread starvation 00:00:34.126 [Pipeline] writeFile 00:00:34.144 [Pipeline] sh 00:00:34.424 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:34.438 [Pipeline] sh 00:00:34.718 + cat autorun-spdk.conf 00:00:34.718 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.718 SPDK_TEST_NVMF=1 00:00:34.718 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:34.718 SPDK_TEST_USDT=1 00:00:34.718 SPDK_TEST_NVMF_MDNS=1 00:00:34.718 SPDK_RUN_UBSAN=1 00:00:34.718 NET_TYPE=virt 00:00:34.718 SPDK_JSONRPC_GO_CLIENT=1 00:00:34.718 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:34.724 RUN_NIGHTLY= 00:00:34.726 [Pipeline] } 00:00:34.744 [Pipeline] // stage 00:00:34.761 [Pipeline] stage 00:00:34.763 [Pipeline] { (Run VM) 00:00:34.778 [Pipeline] sh 00:00:35.056 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:35.056 + echo 'Start stage prepare_nvme.sh' 00:00:35.056 Start stage prepare_nvme.sh 00:00:35.056 + [[ -n 5 ]] 00:00:35.056 + disk_prefix=ex5 00:00:35.056 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:35.056 + [[ -e 
/var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:35.056 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:35.056 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.056 ++ SPDK_TEST_NVMF=1 00:00:35.056 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.056 ++ SPDK_TEST_USDT=1 00:00:35.056 ++ SPDK_TEST_NVMF_MDNS=1 00:00:35.056 ++ SPDK_RUN_UBSAN=1 00:00:35.056 ++ NET_TYPE=virt 00:00:35.056 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:35.056 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:35.056 ++ RUN_NIGHTLY= 00:00:35.056 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:35.056 + nvme_files=() 00:00:35.056 + declare -A nvme_files 00:00:35.056 + backend_dir=/var/lib/libvirt/images/backends 00:00:35.056 + nvme_files['nvme.img']=5G 00:00:35.056 + nvme_files['nvme-cmb.img']=5G 00:00:35.056 + nvme_files['nvme-multi0.img']=4G 00:00:35.056 + nvme_files['nvme-multi1.img']=4G 00:00:35.056 + nvme_files['nvme-multi2.img']=4G 00:00:35.056 + nvme_files['nvme-openstack.img']=8G 00:00:35.056 + nvme_files['nvme-zns.img']=5G 00:00:35.056 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:35.056 + (( SPDK_TEST_FTL == 1 )) 00:00:35.056 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:35.056 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:35.056 + for nvme in "${!nvme_files[@]}" 00:00:35.056 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:35.056 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.056 + for nvme in "${!nvme_files[@]}" 00:00:35.056 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:35.056 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.056 + for nvme in "${!nvme_files[@]}" 00:00:35.056 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:35.056 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:35.056 + for nvme in "${!nvme_files[@]}" 00:00:35.056 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:35.056 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.056 + for nvme in "${!nvme_files[@]}" 00:00:35.056 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:35.056 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.056 + for nvme in "${!nvme_files[@]}" 00:00:35.056 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:35.056 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:35.056 + for nvme in "${!nvme_files[@]}" 00:00:35.056 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:35.988 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:35.988 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:35.988 + echo 'End stage prepare_nvme.sh' 00:00:35.988 End stage prepare_nvme.sh 00:00:35.998 [Pipeline] sh 00:00:36.274 + DISTRO=fedora38 CPUS=10 RAM=12288 
jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:36.274 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:00:36.275 00:00:36.275 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:36.275 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:36.275 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:36.275 HELP=0 00:00:36.275 DRY_RUN=0 00:00:36.275 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:36.275 NVME_DISKS_TYPE=nvme,nvme, 00:00:36.275 NVME_AUTO_CREATE=0 00:00:36.275 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:36.275 NVME_CMB=,, 00:00:36.275 NVME_PMR=,, 00:00:36.275 NVME_ZNS=,, 00:00:36.275 NVME_MS=,, 00:00:36.275 NVME_FDP=,, 00:00:36.275 SPDK_VAGRANT_DISTRO=fedora38 00:00:36.275 SPDK_VAGRANT_VMCPU=10 00:00:36.275 SPDK_VAGRANT_VMRAM=12288 00:00:36.275 SPDK_VAGRANT_PROVIDER=libvirt 00:00:36.275 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:36.275 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:36.275 SPDK_OPENSTACK_NETWORK=0 00:00:36.275 VAGRANT_PACKAGE_BOX=0 00:00:36.275 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:36.275 FORCE_DISTRO=true 00:00:36.275 VAGRANT_BOX_VERSION= 00:00:36.275 EXTRA_VAGRANTFILES= 00:00:36.275 NIC_MODEL=e1000 00:00:36.275 00:00:36.275 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:00:36.275 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:39.557 Bringing machine 'default' up with 'libvirt' provider... 00:00:39.557 ==> default: Creating image (snapshot of base box volume). 00:00:39.816 ==> default: Creating domain with the following settings... 
00:00:39.816 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721578557_dbc3f736cd1688e51c12 00:00:39.816 ==> default: -- Domain type: kvm 00:00:39.816 ==> default: -- Cpus: 10 00:00:39.816 ==> default: -- Feature: acpi 00:00:39.816 ==> default: -- Feature: apic 00:00:39.816 ==> default: -- Feature: pae 00:00:39.816 ==> default: -- Memory: 12288M 00:00:39.816 ==> default: -- Memory Backing: hugepages: 00:00:39.816 ==> default: -- Management MAC: 00:00:39.816 ==> default: -- Loader: 00:00:39.816 ==> default: -- Nvram: 00:00:39.816 ==> default: -- Base box: spdk/fedora38 00:00:39.816 ==> default: -- Storage pool: default 00:00:39.816 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721578557_dbc3f736cd1688e51c12.img (20G) 00:00:39.816 ==> default: -- Volume Cache: default 00:00:39.816 ==> default: -- Kernel: 00:00:39.816 ==> default: -- Initrd: 00:00:39.816 ==> default: -- Graphics Type: vnc 00:00:39.816 ==> default: -- Graphics Port: -1 00:00:39.816 ==> default: -- Graphics IP: 127.0.0.1 00:00:39.816 ==> default: -- Graphics Password: Not defined 00:00:39.816 ==> default: -- Video Type: cirrus 00:00:39.816 ==> default: -- Video VRAM: 9216 00:00:39.816 ==> default: -- Sound Type: 00:00:39.816 ==> default: -- Keymap: en-us 00:00:39.816 ==> default: -- TPM Path: 00:00:39.816 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:39.816 ==> default: -- Command line args: 00:00:39.816 ==> default: -> value=-device, 00:00:39.816 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:39.816 ==> default: -> value=-drive, 00:00:39.816 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:39.816 ==> default: -> value=-device, 00:00:39.816 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:39.816 ==> default: -> value=-device, 00:00:39.816 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:39.816 ==> default: -> value=-drive, 00:00:39.816 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:39.816 ==> default: -> value=-device, 00:00:39.816 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:39.816 ==> default: -> value=-drive, 00:00:39.816 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:39.816 ==> default: -> value=-device, 00:00:39.816 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:39.816 ==> default: -> value=-drive, 00:00:39.816 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:39.816 ==> default: -> value=-device, 00:00:39.816 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:39.816 ==> default: Creating shared folders metadata... 00:00:39.816 ==> default: Starting domain. 00:00:41.716 ==> default: Waiting for domain to get an IP address... 00:00:56.583 ==> default: Waiting for SSH to become available... 00:00:58.485 ==> default: Configuring and enabling network interfaces... 
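The -drive/-device pairs listed in the domain definition above wire each raw backend image to an emulated NVMe controller (nvme) and namespace (nvme-ns). For reference, a minimal standalone sketch of the same wiring for the first controller, assuming qemu-system-x86_64 with NVMe emulation is installed and the ex5-* backends exist (paths, serial and block sizes are taken from the log; machine type and memory size are illustrative, and a bootable guest disk is omitted):

  # Attach ex5-nvme.img as namespace 1 of NVMe controller "nvme-0" (serial 12340)
  qemu-system-x86_64 \
    -machine q35,accel=kvm -m 2048 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme,id=nvme-0,serial=12340 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096

The second controller (serial 12341) repeats the pattern with three nvme-ns devices (nsid 1-3) backed by the ex5-nvme-multi*.img files, which is why the guest later enumerates nvme1n1, nvme1n2 and nvme1n3.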
00:01:03.748 default: SSH address: 192.168.121.154:22 00:01:03.748 default: SSH username: vagrant 00:01:03.748 default: SSH auth method: private key 00:01:05.645 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:13.792 ==> default: Mounting SSHFS shared folder... 00:01:15.692 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:15.692 ==> default: Checking Mount.. 00:01:16.623 ==> default: Folder Successfully Mounted! 00:01:16.623 ==> default: Running provisioner: file... 00:01:17.556 default: ~/.gitconfig => .gitconfig 00:01:17.918 00:01:17.918 SUCCESS! 00:01:17.918 00:01:17.918 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:17.918 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:17.918 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:17.918 00:01:17.927 [Pipeline] } 00:01:17.946 [Pipeline] // stage 00:01:17.954 [Pipeline] dir 00:01:17.954 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:17.956 [Pipeline] { 00:01:17.970 [Pipeline] catchError 00:01:17.971 [Pipeline] { 00:01:17.985 [Pipeline] sh 00:01:18.276 + vagrant ssh-config --host vagrant 00:01:18.276 + sed -ne /^Host/,$p 00:01:18.276 + tee ssh_conf 00:01:22.464 Host vagrant 00:01:22.464 HostName 192.168.121.154 00:01:22.464 User vagrant 00:01:22.464 Port 22 00:01:22.464 UserKnownHostsFile /dev/null 00:01:22.464 StrictHostKeyChecking no 00:01:22.464 PasswordAuthentication no 00:01:22.464 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:22.464 IdentitiesOnly yes 00:01:22.464 LogLevel FATAL 00:01:22.464 ForwardAgent yes 00:01:22.464 ForwardX11 yes 00:01:22.464 00:01:22.478 [Pipeline] withEnv 00:01:22.480 [Pipeline] { 00:01:22.496 [Pipeline] sh 00:01:22.774 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:22.774 source /etc/os-release 00:01:22.774 [[ -e /image.version ]] && img=$(< /image.version) 00:01:22.774 # Minimal, systemd-like check. 00:01:22.774 if [[ -e /.dockerenv ]]; then 00:01:22.774 # Clear garbage from the node's name: 00:01:22.774 # agt-er_autotest_547-896 -> autotest_547-896 00:01:22.774 # $HOSTNAME is the actual container id 00:01:22.774 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:22.774 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:22.774 # We can assume this is a mount from a host where container is running, 00:01:22.774 # so fetch its hostname to easily identify the target swarm worker. 
00:01:22.774 container="$(< /etc/hostname) ($agent)" 00:01:22.774 else 00:01:22.774 # Fallback 00:01:22.774 container=$agent 00:01:22.774 fi 00:01:22.774 fi 00:01:22.774 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:22.774 00:01:23.044 [Pipeline] } 00:01:23.065 [Pipeline] // withEnv 00:01:23.075 [Pipeline] setCustomBuildProperty 00:01:23.093 [Pipeline] stage 00:01:23.095 [Pipeline] { (Tests) 00:01:23.117 [Pipeline] sh 00:01:23.394 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:23.662 [Pipeline] sh 00:01:23.940 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:24.220 [Pipeline] timeout 00:01:24.221 Timeout set to expire in 40 min 00:01:24.222 [Pipeline] { 00:01:24.232 [Pipeline] sh 00:01:24.516 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:25.078 HEAD is now at 89fd17309 bdev/raid: add qos for raid process 00:01:25.090 [Pipeline] sh 00:01:25.366 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:25.636 [Pipeline] sh 00:01:25.911 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:26.184 [Pipeline] sh 00:01:26.460 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:26.717 ++ readlink -f spdk_repo 00:01:26.717 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:26.717 + [[ -n /home/vagrant/spdk_repo ]] 00:01:26.717 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:26.717 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:26.717 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:26.717 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:26.717 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:26.717 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:26.717 + cd /home/vagrant/spdk_repo 00:01:26.717 + source /etc/os-release 00:01:26.717 ++ NAME='Fedora Linux' 00:01:26.717 ++ VERSION='38 (Cloud Edition)' 00:01:26.717 ++ ID=fedora 00:01:26.717 ++ VERSION_ID=38 00:01:26.717 ++ VERSION_CODENAME= 00:01:26.717 ++ PLATFORM_ID=platform:f38 00:01:26.717 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:26.717 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:26.717 ++ LOGO=fedora-logo-icon 00:01:26.717 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:26.717 ++ HOME_URL=https://fedoraproject.org/ 00:01:26.717 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:26.717 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:26.717 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:26.717 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:26.717 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:26.717 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:26.717 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:26.717 ++ SUPPORT_END=2024-05-14 00:01:26.717 ++ VARIANT='Cloud Edition' 00:01:26.717 ++ VARIANT_ID=cloud 00:01:26.717 + uname -a 00:01:26.717 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:26.717 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:26.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:26.974 Hugepages 00:01:26.974 node hugesize free / total 00:01:26.974 node0 1048576kB 0 / 0 00:01:26.974 node0 2048kB 0 / 0 00:01:26.974 00:01:26.974 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:27.231 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:27.231 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:27.231 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:27.231 + rm -f /tmp/spdk-ld-path 00:01:27.231 + source autorun-spdk.conf 00:01:27.231 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.231 ++ SPDK_TEST_NVMF=1 00:01:27.231 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.231 ++ SPDK_TEST_USDT=1 00:01:27.231 ++ SPDK_TEST_NVMF_MDNS=1 00:01:27.231 ++ SPDK_RUN_UBSAN=1 00:01:27.231 ++ NET_TYPE=virt 00:01:27.231 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:27.231 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.231 ++ RUN_NIGHTLY= 00:01:27.231 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:27.231 + [[ -n '' ]] 00:01:27.231 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:27.231 + for M in /var/spdk/build-*-manifest.txt 00:01:27.231 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:27.231 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:27.231 + for M in /var/spdk/build-*-manifest.txt 00:01:27.231 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:27.231 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:27.231 ++ uname 00:01:27.231 + [[ Linux == \L\i\n\u\x ]] 00:01:27.232 + sudo dmesg -T 00:01:27.232 + sudo dmesg --clear 00:01:27.232 + dmesg_pid=5275 00:01:27.232 + sudo dmesg -Tw 00:01:27.232 + [[ Fedora Linux == FreeBSD ]] 00:01:27.232 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.232 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:27.232 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:27.232 + [[ -x /usr/src/fio-static/fio ]] 00:01:27.232 + 
export FIO_BIN=/usr/src/fio-static/fio 00:01:27.232 + FIO_BIN=/usr/src/fio-static/fio 00:01:27.232 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:27.232 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:27.232 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:27.232 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.232 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:27.232 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:27.232 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.232 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:27.232 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:27.232 Test configuration: 00:01:27.232 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.232 SPDK_TEST_NVMF=1 00:01:27.232 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.232 SPDK_TEST_USDT=1 00:01:27.232 SPDK_TEST_NVMF_MDNS=1 00:01:27.232 SPDK_RUN_UBSAN=1 00:01:27.232 NET_TYPE=virt 00:01:27.232 SPDK_JSONRPC_GO_CLIENT=1 00:01:27.232 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:27.232 RUN_NIGHTLY= 16:16:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:27.232 16:16:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:27.232 16:16:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:27.232 16:16:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:27.232 16:16:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.232 16:16:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.232 16:16:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.232 16:16:45 -- paths/export.sh@5 -- $ export PATH 00:01:27.232 16:16:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:27.232 16:16:45 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:27.232 16:16:45 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:27.232 16:16:45 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721578605.XXXXXX 00:01:27.489 16:16:45 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721578605.Ce3mJS 00:01:27.489 16:16:45 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:27.489 16:16:45 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:27.489 16:16:45 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:27.489 16:16:45 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:27.489 16:16:45 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:27.489 16:16:45 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:27.489 16:16:45 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:27.489 16:16:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.489 16:16:45 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:27.489 16:16:45 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:27.489 16:16:45 -- pm/common@17 -- $ local monitor 00:01:27.489 16:16:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.489 16:16:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:27.489 16:16:45 -- pm/common@25 -- $ sleep 1 00:01:27.489 16:16:45 -- pm/common@21 -- $ date +%s 00:01:27.489 16:16:45 -- pm/common@21 -- $ date +%s 00:01:27.489 16:16:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721578605 00:01:27.489 16:16:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721578605 00:01:27.489 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721578605_collect-vmstat.pm.log 00:01:27.489 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721578605_collect-cpu-load.pm.log 00:01:28.422 16:16:46 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:28.422 16:16:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:28.422 16:16:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:28.422 16:16:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:28.422 16:16:46 -- spdk/autobuild.sh@16 -- $ date -u 00:01:28.422 Sun Jul 21 04:16:46 PM UTC 2024 00:01:28.422 16:16:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:28.422 v24.09-pre-254-g89fd17309 00:01:28.422 16:16:46 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:28.422 16:16:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:28.422 16:16:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:28.422 16:16:46 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:28.422 16:16:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:28.422 16:16:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.422 ************************************ 00:01:28.422 START TEST ubsan 00:01:28.422 ************************************ 00:01:28.422 using ubsan 00:01:28.422 16:16:46 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:28.422 00:01:28.422 
real 0m0.000s 00:01:28.422 user 0m0.000s 00:01:28.422 sys 0m0.000s 00:01:28.422 16:16:46 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:28.422 16:16:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:28.422 ************************************ 00:01:28.422 END TEST ubsan 00:01:28.422 ************************************ 00:01:28.422 16:16:46 -- common/autotest_common.sh@1142 -- $ return 0 00:01:28.422 16:16:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:28.422 16:16:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:28.422 16:16:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:28.422 16:16:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:28.422 16:16:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:28.422 16:16:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:28.422 16:16:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:28.422 16:16:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:28.422 16:16:46 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:28.679 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:28.679 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:28.937 Using 'verbs' RDMA provider 00:01:44.768 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:56.966 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:56.966 go version go1.21.1 linux/amd64 00:01:56.966 Creating mk/config.mk...done. 00:01:56.966 Creating mk/cc.flags.mk...done. 00:01:56.966 Type 'make' to build. 00:01:56.966 16:17:14 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:56.966 16:17:14 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:56.966 16:17:14 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:56.966 16:17:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.966 ************************************ 00:01:56.966 START TEST make 00:01:56.966 ************************************ 00:01:56.966 16:17:14 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:56.966 make[1]: Nothing to be done for 'all'. 
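Outside the autobuild wrapper, the configure-and-build step above reduces to SPDK's ./configure followed by make. A condensed sketch, assuming a checkout with submodules at ~/spdk_repo/spdk (the flag list is copied verbatim from the config_params line in the log; the path is illustrative):

  cd ~/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --enable-ubsan --enable-coverage \
      --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio \
      --with-iscsi-initiator --with-ublk --with-avahi --with-golang \
      --with-shared --disable-unit-tests
  make -j10   # same job count as the CI run

make builds the bundled DPDK first via Meson/ninja (the output that follows), then compiles the SPDK libraries and applications against it.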
00:02:09.176 The Meson build system 00:02:09.176 Version: 1.3.1 00:02:09.176 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:09.176 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:09.176 Build type: native build 00:02:09.176 Program cat found: YES (/usr/bin/cat) 00:02:09.176 Project name: DPDK 00:02:09.176 Project version: 24.03.0 00:02:09.176 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:09.176 C linker for the host machine: cc ld.bfd 2.39-16 00:02:09.176 Host machine cpu family: x86_64 00:02:09.176 Host machine cpu: x86_64 00:02:09.176 Message: ## Building in Developer Mode ## 00:02:09.176 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:09.176 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:09.176 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:09.176 Program python3 found: YES (/usr/bin/python3) 00:02:09.176 Program cat found: YES (/usr/bin/cat) 00:02:09.176 Compiler for C supports arguments -march=native: YES 00:02:09.176 Checking for size of "void *" : 8 00:02:09.176 Checking for size of "void *" : 8 (cached) 00:02:09.176 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:09.176 Library m found: YES 00:02:09.176 Library numa found: YES 00:02:09.176 Has header "numaif.h" : YES 00:02:09.176 Library fdt found: NO 00:02:09.176 Library execinfo found: NO 00:02:09.176 Has header "execinfo.h" : YES 00:02:09.176 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:09.176 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:09.176 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:09.176 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:09.176 Run-time dependency openssl found: YES 3.0.9 00:02:09.176 Run-time dependency libpcap found: YES 1.10.4 00:02:09.176 Has header "pcap.h" with dependency libpcap: YES 00:02:09.176 Compiler for C supports arguments -Wcast-qual: YES 00:02:09.176 Compiler for C supports arguments -Wdeprecated: YES 00:02:09.176 Compiler for C supports arguments -Wformat: YES 00:02:09.176 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:09.176 Compiler for C supports arguments -Wformat-security: NO 00:02:09.176 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:09.176 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:09.176 Compiler for C supports arguments -Wnested-externs: YES 00:02:09.176 Compiler for C supports arguments -Wold-style-definition: YES 00:02:09.176 Compiler for C supports arguments -Wpointer-arith: YES 00:02:09.176 Compiler for C supports arguments -Wsign-compare: YES 00:02:09.176 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:09.176 Compiler for C supports arguments -Wundef: YES 00:02:09.176 Compiler for C supports arguments -Wwrite-strings: YES 00:02:09.176 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:09.176 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.176 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.176 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.176 Program objdump found: YES (/usr/bin/objdump) 00:02:09.176 Compiler for C supports arguments -mavx512f: YES 00:02:09.176 Checking if "AVX512 checking" compiles: YES 00:02:09.176 Fetching value of define "__SSE4_2__" : 1 00:02:09.176 Fetching value of define 
"__AES__" : 1 00:02:09.176 Fetching value of define "__AVX__" : 1 00:02:09.176 Fetching value of define "__AVX2__" : 1 00:02:09.176 Fetching value of define "__AVX512BW__" : (undefined) 00:02:09.176 Fetching value of define "__AVX512CD__" : (undefined) 00:02:09.176 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:09.176 Fetching value of define "__AVX512F__" : (undefined) 00:02:09.176 Fetching value of define "__AVX512VL__" : (undefined) 00:02:09.176 Fetching value of define "__PCLMUL__" : 1 00:02:09.176 Fetching value of define "__RDRND__" : 1 00:02:09.176 Fetching value of define "__RDSEED__" : 1 00:02:09.176 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:09.176 Fetching value of define "__znver1__" : (undefined) 00:02:09.176 Fetching value of define "__znver2__" : (undefined) 00:02:09.176 Fetching value of define "__znver3__" : (undefined) 00:02:09.176 Fetching value of define "__znver4__" : (undefined) 00:02:09.176 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.176 Message: lib/log: Defining dependency "log" 00:02:09.176 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.176 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.176 Checking for function "getentropy" : NO 00:02:09.176 Message: lib/eal: Defining dependency "eal" 00:02:09.176 Message: lib/ring: Defining dependency "ring" 00:02:09.176 Message: lib/rcu: Defining dependency "rcu" 00:02:09.176 Message: lib/mempool: Defining dependency "mempool" 00:02:09.176 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.176 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:09.176 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:09.176 Compiler for C supports arguments -mpclmul: YES 00:02:09.176 Compiler for C supports arguments -maes: YES 00:02:09.176 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.176 Compiler for C supports arguments -mavx512bw: YES 00:02:09.176 Compiler for C supports arguments -mavx512dq: YES 00:02:09.176 Compiler for C supports arguments -mavx512vl: YES 00:02:09.176 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.176 Compiler for C supports arguments -mavx2: YES 00:02:09.176 Compiler for C supports arguments -mavx: YES 00:02:09.176 Message: lib/net: Defining dependency "net" 00:02:09.176 Message: lib/meter: Defining dependency "meter" 00:02:09.176 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.176 Message: lib/pci: Defining dependency "pci" 00:02:09.176 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.176 Message: lib/hash: Defining dependency "hash" 00:02:09.176 Message: lib/timer: Defining dependency "timer" 00:02:09.176 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.176 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:09.176 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.176 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.176 Message: lib/power: Defining dependency "power" 00:02:09.176 Message: lib/reorder: Defining dependency "reorder" 00:02:09.176 Message: lib/security: Defining dependency "security" 00:02:09.176 Has header "linux/userfaultfd.h" : YES 00:02:09.176 Has header "linux/vduse.h" : YES 00:02:09.176 Message: lib/vhost: Defining dependency "vhost" 00:02:09.176 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.176 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.176 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.176 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.176 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:09.176 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:09.176 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:09.176 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:09.176 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:09.176 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:09.176 Program doxygen found: YES (/usr/bin/doxygen) 00:02:09.176 Configuring doxy-api-html.conf using configuration 00:02:09.176 Configuring doxy-api-man.conf using configuration 00:02:09.176 Program mandb found: YES (/usr/bin/mandb) 00:02:09.176 Program sphinx-build found: NO 00:02:09.176 Configuring rte_build_config.h using configuration 00:02:09.176 Message: 00:02:09.176 ================= 00:02:09.176 Applications Enabled 00:02:09.176 ================= 00:02:09.176 00:02:09.176 apps: 00:02:09.176 00:02:09.176 00:02:09.176 Message: 00:02:09.176 ================= 00:02:09.176 Libraries Enabled 00:02:09.176 ================= 00:02:09.176 00:02:09.176 libs: 00:02:09.176 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:09.176 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:09.176 cryptodev, dmadev, power, reorder, security, vhost, 00:02:09.176 00:02:09.176 Message: 00:02:09.176 =============== 00:02:09.176 Drivers Enabled 00:02:09.176 =============== 00:02:09.176 00:02:09.176 common: 00:02:09.176 00:02:09.176 bus: 00:02:09.176 pci, vdev, 00:02:09.176 mempool: 00:02:09.176 ring, 00:02:09.176 dma: 00:02:09.176 00:02:09.176 net: 00:02:09.176 00:02:09.176 crypto: 00:02:09.176 00:02:09.176 compress: 00:02:09.176 00:02:09.176 vdpa: 00:02:09.176 00:02:09.176 00:02:09.177 Message: 00:02:09.177 ================= 00:02:09.177 Content Skipped 00:02:09.177 ================= 00:02:09.177 00:02:09.177 apps: 00:02:09.177 dumpcap: explicitly disabled via build config 00:02:09.177 graph: explicitly disabled via build config 00:02:09.177 pdump: explicitly disabled via build config 00:02:09.177 proc-info: explicitly disabled via build config 00:02:09.177 test-acl: explicitly disabled via build config 00:02:09.177 test-bbdev: explicitly disabled via build config 00:02:09.177 test-cmdline: explicitly disabled via build config 00:02:09.177 test-compress-perf: explicitly disabled via build config 00:02:09.177 test-crypto-perf: explicitly disabled via build config 00:02:09.177 test-dma-perf: explicitly disabled via build config 00:02:09.177 test-eventdev: explicitly disabled via build config 00:02:09.177 test-fib: explicitly disabled via build config 00:02:09.177 test-flow-perf: explicitly disabled via build config 00:02:09.177 test-gpudev: explicitly disabled via build config 00:02:09.177 test-mldev: explicitly disabled via build config 00:02:09.177 test-pipeline: explicitly disabled via build config 00:02:09.177 test-pmd: explicitly disabled via build config 00:02:09.177 test-regex: explicitly disabled via build config 00:02:09.177 test-sad: explicitly disabled via build config 00:02:09.177 test-security-perf: explicitly disabled via build config 00:02:09.177 00:02:09.177 libs: 00:02:09.177 argparse: explicitly disabled via build config 00:02:09.177 metrics: explicitly disabled via build config 00:02:09.177 acl: explicitly disabled via build config 00:02:09.177 bbdev: explicitly disabled via build config 00:02:09.177 
bitratestats: explicitly disabled via build config 00:02:09.177 bpf: explicitly disabled via build config 00:02:09.177 cfgfile: explicitly disabled via build config 00:02:09.177 distributor: explicitly disabled via build config 00:02:09.177 efd: explicitly disabled via build config 00:02:09.177 eventdev: explicitly disabled via build config 00:02:09.177 dispatcher: explicitly disabled via build config 00:02:09.177 gpudev: explicitly disabled via build config 00:02:09.177 gro: explicitly disabled via build config 00:02:09.177 gso: explicitly disabled via build config 00:02:09.177 ip_frag: explicitly disabled via build config 00:02:09.177 jobstats: explicitly disabled via build config 00:02:09.177 latencystats: explicitly disabled via build config 00:02:09.177 lpm: explicitly disabled via build config 00:02:09.177 member: explicitly disabled via build config 00:02:09.177 pcapng: explicitly disabled via build config 00:02:09.177 rawdev: explicitly disabled via build config 00:02:09.177 regexdev: explicitly disabled via build config 00:02:09.177 mldev: explicitly disabled via build config 00:02:09.177 rib: explicitly disabled via build config 00:02:09.177 sched: explicitly disabled via build config 00:02:09.177 stack: explicitly disabled via build config 00:02:09.177 ipsec: explicitly disabled via build config 00:02:09.177 pdcp: explicitly disabled via build config 00:02:09.177 fib: explicitly disabled via build config 00:02:09.177 port: explicitly disabled via build config 00:02:09.177 pdump: explicitly disabled via build config 00:02:09.177 table: explicitly disabled via build config 00:02:09.177 pipeline: explicitly disabled via build config 00:02:09.177 graph: explicitly disabled via build config 00:02:09.177 node: explicitly disabled via build config 00:02:09.177 00:02:09.177 drivers: 00:02:09.177 common/cpt: not in enabled drivers build config 00:02:09.177 common/dpaax: not in enabled drivers build config 00:02:09.177 common/iavf: not in enabled drivers build config 00:02:09.177 common/idpf: not in enabled drivers build config 00:02:09.177 common/ionic: not in enabled drivers build config 00:02:09.177 common/mvep: not in enabled drivers build config 00:02:09.177 common/octeontx: not in enabled drivers build config 00:02:09.177 bus/auxiliary: not in enabled drivers build config 00:02:09.177 bus/cdx: not in enabled drivers build config 00:02:09.177 bus/dpaa: not in enabled drivers build config 00:02:09.177 bus/fslmc: not in enabled drivers build config 00:02:09.177 bus/ifpga: not in enabled drivers build config 00:02:09.177 bus/platform: not in enabled drivers build config 00:02:09.177 bus/uacce: not in enabled drivers build config 00:02:09.177 bus/vmbus: not in enabled drivers build config 00:02:09.177 common/cnxk: not in enabled drivers build config 00:02:09.177 common/mlx5: not in enabled drivers build config 00:02:09.177 common/nfp: not in enabled drivers build config 00:02:09.177 common/nitrox: not in enabled drivers build config 00:02:09.177 common/qat: not in enabled drivers build config 00:02:09.177 common/sfc_efx: not in enabled drivers build config 00:02:09.177 mempool/bucket: not in enabled drivers build config 00:02:09.177 mempool/cnxk: not in enabled drivers build config 00:02:09.177 mempool/dpaa: not in enabled drivers build config 00:02:09.177 mempool/dpaa2: not in enabled drivers build config 00:02:09.177 mempool/octeontx: not in enabled drivers build config 00:02:09.177 mempool/stack: not in enabled drivers build config 00:02:09.177 dma/cnxk: not in enabled drivers build 
config 00:02:09.177 dma/dpaa: not in enabled drivers build config 00:02:09.177 dma/dpaa2: not in enabled drivers build config 00:02:09.177 dma/hisilicon: not in enabled drivers build config 00:02:09.177 dma/idxd: not in enabled drivers build config 00:02:09.177 dma/ioat: not in enabled drivers build config 00:02:09.177 dma/skeleton: not in enabled drivers build config 00:02:09.177 net/af_packet: not in enabled drivers build config 00:02:09.177 net/af_xdp: not in enabled drivers build config 00:02:09.177 net/ark: not in enabled drivers build config 00:02:09.177 net/atlantic: not in enabled drivers build config 00:02:09.177 net/avp: not in enabled drivers build config 00:02:09.177 net/axgbe: not in enabled drivers build config 00:02:09.177 net/bnx2x: not in enabled drivers build config 00:02:09.177 net/bnxt: not in enabled drivers build config 00:02:09.177 net/bonding: not in enabled drivers build config 00:02:09.177 net/cnxk: not in enabled drivers build config 00:02:09.177 net/cpfl: not in enabled drivers build config 00:02:09.177 net/cxgbe: not in enabled drivers build config 00:02:09.177 net/dpaa: not in enabled drivers build config 00:02:09.177 net/dpaa2: not in enabled drivers build config 00:02:09.177 net/e1000: not in enabled drivers build config 00:02:09.177 net/ena: not in enabled drivers build config 00:02:09.177 net/enetc: not in enabled drivers build config 00:02:09.177 net/enetfec: not in enabled drivers build config 00:02:09.177 net/enic: not in enabled drivers build config 00:02:09.177 net/failsafe: not in enabled drivers build config 00:02:09.177 net/fm10k: not in enabled drivers build config 00:02:09.177 net/gve: not in enabled drivers build config 00:02:09.177 net/hinic: not in enabled drivers build config 00:02:09.177 net/hns3: not in enabled drivers build config 00:02:09.177 net/i40e: not in enabled drivers build config 00:02:09.177 net/iavf: not in enabled drivers build config 00:02:09.177 net/ice: not in enabled drivers build config 00:02:09.177 net/idpf: not in enabled drivers build config 00:02:09.177 net/igc: not in enabled drivers build config 00:02:09.177 net/ionic: not in enabled drivers build config 00:02:09.177 net/ipn3ke: not in enabled drivers build config 00:02:09.177 net/ixgbe: not in enabled drivers build config 00:02:09.177 net/mana: not in enabled drivers build config 00:02:09.177 net/memif: not in enabled drivers build config 00:02:09.177 net/mlx4: not in enabled drivers build config 00:02:09.177 net/mlx5: not in enabled drivers build config 00:02:09.177 net/mvneta: not in enabled drivers build config 00:02:09.177 net/mvpp2: not in enabled drivers build config 00:02:09.177 net/netvsc: not in enabled drivers build config 00:02:09.177 net/nfb: not in enabled drivers build config 00:02:09.177 net/nfp: not in enabled drivers build config 00:02:09.177 net/ngbe: not in enabled drivers build config 00:02:09.177 net/null: not in enabled drivers build config 00:02:09.177 net/octeontx: not in enabled drivers build config 00:02:09.177 net/octeon_ep: not in enabled drivers build config 00:02:09.177 net/pcap: not in enabled drivers build config 00:02:09.177 net/pfe: not in enabled drivers build config 00:02:09.177 net/qede: not in enabled drivers build config 00:02:09.177 net/ring: not in enabled drivers build config 00:02:09.177 net/sfc: not in enabled drivers build config 00:02:09.177 net/softnic: not in enabled drivers build config 00:02:09.177 net/tap: not in enabled drivers build config 00:02:09.177 net/thunderx: not in enabled drivers build config 00:02:09.177 
net/txgbe: not in enabled drivers build config 00:02:09.177 net/vdev_netvsc: not in enabled drivers build config 00:02:09.177 net/vhost: not in enabled drivers build config 00:02:09.177 net/virtio: not in enabled drivers build config 00:02:09.177 net/vmxnet3: not in enabled drivers build config 00:02:09.177 raw/*: missing internal dependency, "rawdev" 00:02:09.177 crypto/armv8: not in enabled drivers build config 00:02:09.177 crypto/bcmfs: not in enabled drivers build config 00:02:09.177 crypto/caam_jr: not in enabled drivers build config 00:02:09.177 crypto/ccp: not in enabled drivers build config 00:02:09.177 crypto/cnxk: not in enabled drivers build config 00:02:09.177 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.177 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.177 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.177 crypto/mlx5: not in enabled drivers build config 00:02:09.177 crypto/mvsam: not in enabled drivers build config 00:02:09.177 crypto/nitrox: not in enabled drivers build config 00:02:09.177 crypto/null: not in enabled drivers build config 00:02:09.177 crypto/octeontx: not in enabled drivers build config 00:02:09.177 crypto/openssl: not in enabled drivers build config 00:02:09.177 crypto/scheduler: not in enabled drivers build config 00:02:09.177 crypto/uadk: not in enabled drivers build config 00:02:09.177 crypto/virtio: not in enabled drivers build config 00:02:09.177 compress/isal: not in enabled drivers build config 00:02:09.177 compress/mlx5: not in enabled drivers build config 00:02:09.177 compress/nitrox: not in enabled drivers build config 00:02:09.177 compress/octeontx: not in enabled drivers build config 00:02:09.177 compress/zlib: not in enabled drivers build config 00:02:09.177 regex/*: missing internal dependency, "regexdev" 00:02:09.177 ml/*: missing internal dependency, "mldev" 00:02:09.177 vdpa/ifc: not in enabled drivers build config 00:02:09.177 vdpa/mlx5: not in enabled drivers build config 00:02:09.177 vdpa/nfp: not in enabled drivers build config 00:02:09.177 vdpa/sfc: not in enabled drivers build config 00:02:09.177 event/*: missing internal dependency, "eventdev" 00:02:09.177 baseband/*: missing internal dependency, "bbdev" 00:02:09.177 gpu/*: missing internal dependency, "gpudev" 00:02:09.177 00:02:09.177 00:02:09.177 Build targets in project: 85 00:02:09.177 00:02:09.177 DPDK 24.03.0 00:02:09.177 00:02:09.177 User defined options 00:02:09.177 buildtype : debug 00:02:09.177 default_library : shared 00:02:09.177 libdir : lib 00:02:09.177 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.177 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:09.177 c_link_args : 00:02:09.177 cpu_instruction_set: native 00:02:09.178 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:09.178 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:09.178 enable_docs : false 00:02:09.178 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:09.178 enable_kmods : false 00:02:09.178 max_lcores : 128 00:02:09.178 tests : false 00:02:09.178 00:02:09.178 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.178 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:09.178 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.178 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.178 [3/268] Linking static target lib/librte_kvargs.a 00:02:09.178 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.178 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.178 [6/268] Linking static target lib/librte_log.a 00:02:09.178 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.178 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.178 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.178 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.178 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.178 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.178 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.178 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.178 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.178 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.178 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.178 [18/268] Linking static target lib/librte_telemetry.a 00:02:09.178 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.178 [20/268] Linking target lib/librte_log.so.24.1 00:02:09.446 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:09.704 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:09.704 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.704 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.704 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:09.704 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.704 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.962 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.962 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.962 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.962 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.962 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.221 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.221 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.221 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.479 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.479 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.736 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.736 [39/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.736 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.736 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.736 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.994 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.994 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.994 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.994 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.994 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.252 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.252 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:11.252 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:11.836 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.836 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.836 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.836 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.836 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.836 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:12.094 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:12.094 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:12.094 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:12.352 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.352 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:12.609 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:12.609 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:12.609 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:12.867 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:12.867 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:12.867 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.125 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.125 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.125 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.125 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:13.383 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.641 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:13.641 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:13.641 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.899 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:13.899 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.899 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:13.899 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:13.899 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.899 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.155 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.155 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.413 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.672 [85/268] Linking static target lib/librte_eal.a 00:02:14.672 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.672 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:14.672 [88/268] Linking static target lib/librte_ring.a 00:02:14.935 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.935 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.935 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.935 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:15.192 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.193 [94/268] Linking static target lib/librte_rcu.a 00:02:15.193 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.193 [96/268] Linking static target lib/librte_mempool.a 00:02:15.450 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.450 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.450 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.707 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.707 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.707 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.707 [103/268] Linking static target lib/librte_mbuf.a 00:02:15.965 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.965 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.965 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.530 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:16.530 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.530 [109/268] Linking static target lib/librte_net.a 00:02:16.530 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.788 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.788 [112/268] Linking static target lib/librte_meter.a 00:02:16.788 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.788 [114/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.788 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:17.045 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:17.045 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.303 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.303 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:17.869 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.126 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.384 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.384 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:18.384 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:18.384 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:18.384 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:18.384 [127/268] Linking static target lib/librte_pci.a 00:02:18.641 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:18.641 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:18.899 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:18.899 [131/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.899 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.157 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.157 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.157 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.157 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.157 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.157 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.157 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.157 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.414 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.414 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:19.414 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:19.414 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:19.671 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.929 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:19.929 [147/268] Linking static target lib/librte_ethdev.a 00:02:19.929 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.929 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.929 [150/268] Linking static target lib/librte_cmdline.a 00:02:20.187 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.187 [152/268] Linking static target lib/librte_timer.a 00:02:20.187 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.444 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:20.444 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.444 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:20.444 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.701 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.701 [159/268] Linking static target lib/librte_hash.a 00:02:20.701 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:20.958 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.958 [162/268] Linking static target lib/librte_compressdev.a 00:02:20.958 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.215 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.215 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.215 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.472 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.473 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.473 [169/268] Linking static target lib/librte_dmadev.a 00:02:21.730 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.730 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:21.730 [172/268] Linking static target lib/librte_cryptodev.a 00:02:21.730 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.730 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.730 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.988 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.988 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.988 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:22.246 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.513 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.513 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.513 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.513 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.770 [184/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.770 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.770 [186/268] Linking static target lib/librte_power.a 00:02:22.770 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.770 [188/268] Linking static target lib/librte_reorder.a 00:02:23.334 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:23.334 [190/268] Linking static target lib/librte_security.a 00:02:23.334 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:23.334 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:23.334 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:23.590 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.590 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:24.155 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.155 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.155 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:24.155 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:24.413 [200/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:24.413 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:24.413 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.978 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.978 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.978 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.978 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:25.240 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:25.240 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:25.240 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.240 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:25.240 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:25.240 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:25.504 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:25.504 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.504 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.504 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:25.504 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:25.504 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.504 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.504 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:25.761 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:25.761 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:25.761 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.761 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.761 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.761 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.761 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:26.019 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.952 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.952 [230/268] Linking static target lib/librte_vhost.a 00:02:27.210 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.468 [232/268] Linking target lib/librte_eal.so.24.1 00:02:27.468 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:27.726 [234/268] Linking target lib/librte_timer.so.24.1 00:02:27.726 [235/268] Linking target lib/librte_pci.so.24.1 00:02:27.726 [236/268] Linking target lib/librte_ring.so.24.1 00:02:27.726 [237/268] Linking target lib/librte_meter.so.24.1 00:02:27.726 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:27.726 [239/268] Linking target lib/librte_dmadev.so.24.1 
00:02:27.726 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:27.726 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:27.726 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:27.726 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:27.726 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:27.726 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:27.983 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:27.983 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:27.983 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:27.983 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:27.983 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:28.241 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:28.241 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:28.241 [253/268] Linking target lib/librte_net.so.24.1 00:02:28.241 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:28.241 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:28.241 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:28.241 [257/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.498 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:28.498 [259/268] Linking target lib/librte_hash.so.24.1 00:02:28.498 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:28.498 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:28.498 [262/268] Linking target lib/librte_security.so.24.1 00:02:28.498 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.756 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:28.756 [265/268] Linking target lib/librte_ethdev.so.24.1 00:02:28.756 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:29.014 [267/268] Linking target lib/librte_power.so.24.1 00:02:29.014 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:29.014 INFO: autodetecting backend as ninja 00:02:29.014 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:30.913 CC lib/ut/ut.o 00:02:30.913 CC lib/log/log.o 00:02:30.913 CC lib/log/log_flags.o 00:02:30.913 CC lib/log/log_deprecated.o 00:02:30.913 CC lib/ut_mock/mock.o 00:02:30.913 LIB libspdk_ut_mock.a 00:02:30.913 LIB libspdk_ut.a 00:02:30.913 LIB libspdk_log.a 00:02:30.913 SO libspdk_ut_mock.so.6.0 00:02:30.913 SO libspdk_ut.so.2.0 00:02:30.913 SO libspdk_log.so.7.0 00:02:30.913 SYMLINK libspdk_ut_mock.so 00:02:31.171 SYMLINK libspdk_ut.so 00:02:31.171 SYMLINK libspdk_log.so 00:02:31.171 CC lib/ioat/ioat.o 00:02:31.171 CC lib/util/base64.o 00:02:31.171 CC lib/util/cpuset.o 00:02:31.171 CC lib/util/bit_array.o 00:02:31.171 CC lib/util/crc16.o 00:02:31.171 CC lib/util/crc32.o 00:02:31.171 CC lib/dma/dma.o 00:02:31.171 CC lib/util/crc32c.o 00:02:31.171 CXX lib/trace_parser/trace.o 00:02:31.428 CC lib/vfio_user/host/vfio_user_pci.o 00:02:31.428 CC lib/util/crc32_ieee.o 00:02:31.428 CC lib/util/crc64.o 00:02:31.428 CC lib/util/dif.o 
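The DPDK sub-build above completes all 268 meson/ninja targets before SPDK's own make starts compiling lib/ut, lib/log and the rest of libspdk. As a rough sketch of what that sub-build amounts to, the commands below mirror the configuration reported in the "User defined options" summary earlier in the log; the option values are copied from that summary (the long disable_apps/disable_libs lists are omitted for brevity), while the exact invocation, which is normally generated by SPDK's own build scripts, is an assumption and does not appear in this log.

  # Hypothetical standalone reproduction of the DPDK sub-build shown above (bash).
  # Option values come from the meson summary in the log; in the real run SPDK's
  # build tooling issues this, not a hand-typed command.
  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      --libdir=lib \
      --buildtype=debug \
      --default-library=shared \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false -Denable_kmods=false -Dtests=false -Dmax_lcores=128
  ninja -C build-tmp -j 10   # matches the backend command meson reports in the log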
00:02:31.428 LIB libspdk_dma.a 00:02:31.428 CC lib/util/fd.o 00:02:31.428 CC lib/util/fd_group.o 00:02:31.428 SO libspdk_dma.so.4.0 00:02:31.428 SYMLINK libspdk_dma.so 00:02:31.686 CC lib/vfio_user/host/vfio_user.o 00:02:31.686 CC lib/util/file.o 00:02:31.686 CC lib/util/hexlify.o 00:02:31.686 LIB libspdk_ioat.a 00:02:31.686 CC lib/util/iov.o 00:02:31.686 SO libspdk_ioat.so.7.0 00:02:31.686 CC lib/util/math.o 00:02:31.686 CC lib/util/net.o 00:02:31.686 CC lib/util/pipe.o 00:02:31.686 SYMLINK libspdk_ioat.so 00:02:31.686 CC lib/util/strerror_tls.o 00:02:31.686 CC lib/util/string.o 00:02:31.686 CC lib/util/uuid.o 00:02:31.686 LIB libspdk_vfio_user.a 00:02:31.686 CC lib/util/xor.o 00:02:31.944 CC lib/util/zipf.o 00:02:31.944 SO libspdk_vfio_user.so.5.0 00:02:31.944 SYMLINK libspdk_vfio_user.so 00:02:31.944 LIB libspdk_util.a 00:02:32.203 SO libspdk_util.so.10.0 00:02:32.203 LIB libspdk_trace_parser.a 00:02:32.462 SYMLINK libspdk_util.so 00:02:32.462 SO libspdk_trace_parser.so.5.0 00:02:32.462 SYMLINK libspdk_trace_parser.so 00:02:32.462 CC lib/idxd/idxd.o 00:02:32.462 CC lib/json/json_parse.o 00:02:32.462 CC lib/idxd/idxd_user.o 00:02:32.462 CC lib/json/json_util.o 00:02:32.462 CC lib/json/json_write.o 00:02:32.462 CC lib/rdma_utils/rdma_utils.o 00:02:32.462 CC lib/env_dpdk/env.o 00:02:32.462 CC lib/conf/conf.o 00:02:32.462 CC lib/vmd/vmd.o 00:02:32.462 CC lib/rdma_provider/common.o 00:02:32.721 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:32.721 LIB libspdk_conf.a 00:02:32.721 CC lib/vmd/led.o 00:02:32.721 CC lib/env_dpdk/memory.o 00:02:32.721 SO libspdk_conf.so.6.0 00:02:32.721 LIB libspdk_rdma_utils.a 00:02:32.721 CC lib/idxd/idxd_kernel.o 00:02:32.721 LIB libspdk_json.a 00:02:32.978 SO libspdk_rdma_utils.so.1.0 00:02:32.978 SYMLINK libspdk_conf.so 00:02:32.978 SO libspdk_json.so.6.0 00:02:32.978 CC lib/env_dpdk/pci.o 00:02:32.978 SYMLINK libspdk_rdma_utils.so 00:02:32.978 CC lib/env_dpdk/init.o 00:02:32.978 LIB libspdk_rdma_provider.a 00:02:32.978 CC lib/env_dpdk/threads.o 00:02:32.978 SYMLINK libspdk_json.so 00:02:32.978 CC lib/env_dpdk/pci_ioat.o 00:02:32.978 SO libspdk_rdma_provider.so.6.0 00:02:32.978 SYMLINK libspdk_rdma_provider.so 00:02:32.978 CC lib/env_dpdk/pci_virtio.o 00:02:32.978 CC lib/env_dpdk/pci_vmd.o 00:02:32.978 CC lib/env_dpdk/pci_idxd.o 00:02:33.236 LIB libspdk_idxd.a 00:02:33.236 SO libspdk_idxd.so.12.0 00:02:33.236 LIB libspdk_vmd.a 00:02:33.236 CC lib/env_dpdk/pci_event.o 00:02:33.236 CC lib/jsonrpc/jsonrpc_server.o 00:02:33.236 CC lib/env_dpdk/sigbus_handler.o 00:02:33.236 CC lib/env_dpdk/pci_dpdk.o 00:02:33.236 SO libspdk_vmd.so.6.0 00:02:33.236 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:33.236 SYMLINK libspdk_idxd.so 00:02:33.236 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:33.236 SYMLINK libspdk_vmd.so 00:02:33.236 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:33.236 CC lib/jsonrpc/jsonrpc_client.o 00:02:33.236 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:33.495 LIB libspdk_jsonrpc.a 00:02:33.495 SO libspdk_jsonrpc.so.6.0 00:02:33.753 SYMLINK libspdk_jsonrpc.so 00:02:34.011 LIB libspdk_env_dpdk.a 00:02:34.011 CC lib/rpc/rpc.o 00:02:34.011 SO libspdk_env_dpdk.so.14.1 00:02:34.270 LIB libspdk_rpc.a 00:02:34.270 SYMLINK libspdk_env_dpdk.so 00:02:34.270 SO libspdk_rpc.so.6.0 00:02:34.270 SYMLINK libspdk_rpc.so 00:02:34.528 CC lib/trace/trace.o 00:02:34.528 CC lib/trace/trace_rpc.o 00:02:34.528 CC lib/trace/trace_flags.o 00:02:34.528 CC lib/notify/notify_rpc.o 00:02:34.528 CC lib/notify/notify.o 00:02:34.528 CC lib/keyring/keyring.o 00:02:34.528 CC lib/keyring/keyring_rpc.o 
00:02:34.786 LIB libspdk_notify.a 00:02:34.786 SO libspdk_notify.so.6.0 00:02:34.786 LIB libspdk_keyring.a 00:02:34.786 LIB libspdk_trace.a 00:02:34.786 SYMLINK libspdk_notify.so 00:02:34.786 SO libspdk_keyring.so.1.0 00:02:34.786 SO libspdk_trace.so.10.0 00:02:35.044 SYMLINK libspdk_keyring.so 00:02:35.044 SYMLINK libspdk_trace.so 00:02:35.301 CC lib/thread/thread.o 00:02:35.301 CC lib/sock/sock.o 00:02:35.301 CC lib/sock/sock_rpc.o 00:02:35.301 CC lib/thread/iobuf.o 00:02:35.866 LIB libspdk_sock.a 00:02:35.866 SO libspdk_sock.so.10.0 00:02:35.866 SYMLINK libspdk_sock.so 00:02:36.142 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:36.142 CC lib/nvme/nvme_ctrlr.o 00:02:36.142 CC lib/nvme/nvme_ns_cmd.o 00:02:36.142 CC lib/nvme/nvme_fabric.o 00:02:36.142 CC lib/nvme/nvme_ns.o 00:02:36.142 CC lib/nvme/nvme_pcie_common.o 00:02:36.142 CC lib/nvme/nvme_pcie.o 00:02:36.142 CC lib/nvme/nvme.o 00:02:36.142 CC lib/nvme/nvme_qpair.o 00:02:36.705 LIB libspdk_thread.a 00:02:36.705 SO libspdk_thread.so.10.1 00:02:36.963 CC lib/nvme/nvme_quirks.o 00:02:36.963 SYMLINK libspdk_thread.so 00:02:36.963 CC lib/nvme/nvme_transport.o 00:02:36.963 CC lib/nvme/nvme_discovery.o 00:02:36.963 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:36.963 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:36.963 CC lib/nvme/nvme_tcp.o 00:02:36.963 CC lib/nvme/nvme_opal.o 00:02:37.220 CC lib/nvme/nvme_io_msg.o 00:02:37.220 CC lib/nvme/nvme_poll_group.o 00:02:37.478 CC lib/nvme/nvme_zns.o 00:02:37.478 CC lib/nvme/nvme_stubs.o 00:02:37.735 CC lib/nvme/nvme_auth.o 00:02:37.735 CC lib/accel/accel.o 00:02:37.735 CC lib/blob/blobstore.o 00:02:37.735 CC lib/init/json_config.o 00:02:37.993 CC lib/blob/request.o 00:02:37.993 CC lib/virtio/virtio.o 00:02:37.993 CC lib/init/subsystem.o 00:02:38.250 CC lib/init/subsystem_rpc.o 00:02:38.250 CC lib/init/rpc.o 00:02:38.250 CC lib/virtio/virtio_vhost_user.o 00:02:38.250 CC lib/virtio/virtio_vfio_user.o 00:02:38.250 CC lib/virtio/virtio_pci.o 00:02:38.250 CC lib/nvme/nvme_cuse.o 00:02:38.250 LIB libspdk_init.a 00:02:38.250 CC lib/blob/zeroes.o 00:02:38.250 SO libspdk_init.so.5.0 00:02:38.550 SYMLINK libspdk_init.so 00:02:38.550 CC lib/blob/blob_bs_dev.o 00:02:38.550 CC lib/accel/accel_rpc.o 00:02:38.550 CC lib/accel/accel_sw.o 00:02:38.550 CC lib/nvme/nvme_rdma.o 00:02:38.550 LIB libspdk_virtio.a 00:02:38.550 SO libspdk_virtio.so.7.0 00:02:38.807 SYMLINK libspdk_virtio.so 00:02:38.807 CC lib/event/app.o 00:02:38.807 CC lib/event/reactor.o 00:02:38.807 CC lib/event/log_rpc.o 00:02:38.807 CC lib/event/scheduler_static.o 00:02:38.807 CC lib/event/app_rpc.o 00:02:38.807 LIB libspdk_accel.a 00:02:38.807 SO libspdk_accel.so.16.0 00:02:39.064 SYMLINK libspdk_accel.so 00:02:39.064 LIB libspdk_event.a 00:02:39.064 CC lib/bdev/bdev.o 00:02:39.064 CC lib/bdev/bdev_rpc.o 00:02:39.064 CC lib/bdev/part.o 00:02:39.064 CC lib/bdev/bdev_zone.o 00:02:39.064 CC lib/bdev/scsi_nvme.o 00:02:39.322 SO libspdk_event.so.14.0 00:02:39.322 SYMLINK libspdk_event.so 00:02:40.253 LIB libspdk_nvme.a 00:02:40.253 SO libspdk_nvme.so.13.1 00:02:40.510 SYMLINK libspdk_nvme.so 00:02:40.767 LIB libspdk_blob.a 00:02:41.025 SO libspdk_blob.so.11.0 00:02:41.025 SYMLINK libspdk_blob.so 00:02:41.283 CC lib/lvol/lvol.o 00:02:41.283 CC lib/blobfs/blobfs.o 00:02:41.283 CC lib/blobfs/tree.o 00:02:41.849 LIB libspdk_bdev.a 00:02:42.107 SO libspdk_bdev.so.16.0 00:02:42.107 SYMLINK libspdk_bdev.so 00:02:42.107 LIB libspdk_blobfs.a 00:02:42.402 SO libspdk_blobfs.so.10.0 00:02:42.402 CC lib/scsi/dev.o 00:02:42.402 CC lib/nbd/nbd.o 00:02:42.402 CC lib/nbd/nbd_rpc.o 
00:02:42.402 CC lib/scsi/lun.o 00:02:42.402 CC lib/nvmf/ctrlr.o 00:02:42.402 CC lib/scsi/port.o 00:02:42.402 CC lib/ublk/ublk.o 00:02:42.402 CC lib/ftl/ftl_core.o 00:02:42.402 SYMLINK libspdk_blobfs.so 00:02:42.402 LIB libspdk_lvol.a 00:02:42.402 CC lib/ublk/ublk_rpc.o 00:02:42.402 SO libspdk_lvol.so.10.0 00:02:42.402 SYMLINK libspdk_lvol.so 00:02:42.402 CC lib/nvmf/ctrlr_discovery.o 00:02:42.661 CC lib/scsi/scsi.o 00:02:42.661 CC lib/scsi/scsi_bdev.o 00:02:42.661 CC lib/scsi/scsi_pr.o 00:02:42.661 CC lib/nvmf/ctrlr_bdev.o 00:02:42.661 CC lib/nvmf/subsystem.o 00:02:42.662 CC lib/nvmf/nvmf.o 00:02:42.920 CC lib/ftl/ftl_init.o 00:02:42.920 LIB libspdk_nbd.a 00:02:42.920 SO libspdk_nbd.so.7.0 00:02:42.920 CC lib/ftl/ftl_layout.o 00:02:42.920 SYMLINK libspdk_nbd.so 00:02:42.920 CC lib/ftl/ftl_debug.o 00:02:42.920 LIB libspdk_ublk.a 00:02:42.920 CC lib/ftl/ftl_io.o 00:02:42.920 SO libspdk_ublk.so.3.0 00:02:42.920 CC lib/nvmf/nvmf_rpc.o 00:02:42.920 CC lib/scsi/scsi_rpc.o 00:02:43.178 SYMLINK libspdk_ublk.so 00:02:43.178 CC lib/scsi/task.o 00:02:43.178 CC lib/ftl/ftl_sb.o 00:02:43.178 CC lib/ftl/ftl_l2p.o 00:02:43.178 CC lib/ftl/ftl_l2p_flat.o 00:02:43.178 CC lib/nvmf/transport.o 00:02:43.178 CC lib/ftl/ftl_nv_cache.o 00:02:43.178 LIB libspdk_scsi.a 00:02:43.437 SO libspdk_scsi.so.9.0 00:02:43.437 CC lib/nvmf/tcp.o 00:02:43.437 CC lib/ftl/ftl_band.o 00:02:43.437 SYMLINK libspdk_scsi.so 00:02:43.437 CC lib/ftl/ftl_band_ops.o 00:02:43.437 CC lib/nvmf/stubs.o 00:02:43.695 CC lib/nvmf/mdns_server.o 00:02:43.953 CC lib/ftl/ftl_writer.o 00:02:43.953 CC lib/nvmf/rdma.o 00:02:43.953 CC lib/nvmf/auth.o 00:02:43.953 CC lib/ftl/ftl_rq.o 00:02:43.953 CC lib/ftl/ftl_reloc.o 00:02:44.210 CC lib/ftl/ftl_l2p_cache.o 00:02:44.210 CC lib/iscsi/conn.o 00:02:44.210 CC lib/iscsi/init_grp.o 00:02:44.210 CC lib/iscsi/iscsi.o 00:02:44.210 CC lib/vhost/vhost.o 00:02:44.467 CC lib/ftl/ftl_p2l.o 00:02:44.467 CC lib/iscsi/md5.o 00:02:44.467 CC lib/iscsi/param.o 00:02:44.467 CC lib/ftl/mngt/ftl_mngt.o 00:02:44.743 CC lib/vhost/vhost_rpc.o 00:02:44.743 CC lib/vhost/vhost_scsi.o 00:02:44.743 CC lib/vhost/vhost_blk.o 00:02:44.743 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:44.743 CC lib/iscsi/portal_grp.o 00:02:45.001 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:45.001 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:45.001 CC lib/vhost/rte_vhost_user.o 00:02:45.001 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:45.001 CC lib/iscsi/tgt_node.o 00:02:45.001 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:45.257 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:45.257 CC lib/iscsi/iscsi_subsystem.o 00:02:45.257 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:45.257 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:45.257 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:45.514 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:45.514 CC lib/iscsi/iscsi_rpc.o 00:02:45.514 CC lib/iscsi/task.o 00:02:45.514 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:45.771 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:45.771 CC lib/ftl/utils/ftl_conf.o 00:02:45.771 CC lib/ftl/utils/ftl_md.o 00:02:45.771 CC lib/ftl/utils/ftl_mempool.o 00:02:45.771 CC lib/ftl/utils/ftl_bitmap.o 00:02:45.771 CC lib/ftl/utils/ftl_property.o 00:02:45.771 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:46.028 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:46.028 LIB libspdk_iscsi.a 00:02:46.028 LIB libspdk_nvmf.a 00:02:46.028 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:46.028 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:46.028 SO libspdk_iscsi.so.8.0 00:02:46.028 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:46.028 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:46.028 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:46.028 SO libspdk_nvmf.so.19.0 00:02:46.285 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:46.285 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:46.285 SYMLINK libspdk_iscsi.so 00:02:46.285 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:46.285 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:46.285 CC lib/ftl/base/ftl_base_dev.o 00:02:46.285 LIB libspdk_vhost.a 00:02:46.285 CC lib/ftl/base/ftl_base_bdev.o 00:02:46.285 CC lib/ftl/ftl_trace.o 00:02:46.285 SO libspdk_vhost.so.8.0 00:02:46.285 SYMLINK libspdk_nvmf.so 00:02:46.542 SYMLINK libspdk_vhost.so 00:02:46.542 LIB libspdk_ftl.a 00:02:46.798 SO libspdk_ftl.so.9.0 00:02:47.056 SYMLINK libspdk_ftl.so 00:02:47.620 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.620 CC module/keyring/file/keyring.o 00:02:47.620 CC module/accel/error/accel_error.o 00:02:47.620 CC module/accel/ioat/accel_ioat.o 00:02:47.620 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:47.620 CC module/sock/posix/posix.o 00:02:47.620 CC module/scheduler/gscheduler/gscheduler.o 00:02:47.620 CC module/accel/dsa/accel_dsa.o 00:02:47.620 CC module/blob/bdev/blob_bdev.o 00:02:47.620 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:47.620 LIB libspdk_env_dpdk_rpc.a 00:02:47.876 SO libspdk_env_dpdk_rpc.so.6.0 00:02:47.876 LIB libspdk_scheduler_dpdk_governor.a 00:02:47.876 LIB libspdk_scheduler_gscheduler.a 00:02:47.876 CC module/keyring/file/keyring_rpc.o 00:02:47.876 SYMLINK libspdk_env_dpdk_rpc.so 00:02:47.876 CC module/accel/error/accel_error_rpc.o 00:02:47.876 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:47.876 SO libspdk_scheduler_gscheduler.so.4.0 00:02:47.876 CC module/accel/ioat/accel_ioat_rpc.o 00:02:47.876 LIB libspdk_scheduler_dynamic.a 00:02:47.876 SYMLINK libspdk_scheduler_gscheduler.so 00:02:47.876 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:47.876 CC module/accel/dsa/accel_dsa_rpc.o 00:02:47.876 LIB libspdk_blob_bdev.a 00:02:47.876 SO libspdk_scheduler_dynamic.so.4.0 00:02:47.876 SO libspdk_blob_bdev.so.11.0 00:02:47.876 LIB libspdk_keyring_file.a 00:02:47.876 LIB libspdk_accel_error.a 00:02:47.876 SYMLINK libspdk_scheduler_dynamic.so 00:02:48.133 SO libspdk_keyring_file.so.1.0 00:02:48.133 SYMLINK libspdk_blob_bdev.so 00:02:48.133 LIB libspdk_accel_ioat.a 00:02:48.133 SO libspdk_accel_error.so.2.0 00:02:48.133 CC module/keyring/linux/keyring.o 00:02:48.133 CC module/keyring/linux/keyring_rpc.o 00:02:48.133 SO libspdk_accel_ioat.so.6.0 00:02:48.133 SYMLINK libspdk_keyring_file.so 00:02:48.133 LIB libspdk_accel_dsa.a 00:02:48.133 CC module/accel/iaa/accel_iaa.o 00:02:48.133 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.133 SYMLINK libspdk_accel_error.so 00:02:48.133 SO libspdk_accel_dsa.so.5.0 00:02:48.133 SYMLINK libspdk_accel_ioat.so 00:02:48.133 SYMLINK libspdk_accel_dsa.so 00:02:48.133 LIB libspdk_keyring_linux.a 00:02:48.133 SO libspdk_keyring_linux.so.1.0 00:02:48.390 CC module/bdev/delay/vbdev_delay.o 00:02:48.390 CC module/blobfs/bdev/blobfs_bdev.o 00:02:48.390 CC module/bdev/error/vbdev_error.o 00:02:48.390 SYMLINK libspdk_keyring_linux.so 00:02:48.390 LIB libspdk_accel_iaa.a 00:02:48.390 CC module/bdev/error/vbdev_error_rpc.o 00:02:48.390 CC module/bdev/gpt/gpt.o 00:02:48.390 SO libspdk_accel_iaa.so.3.0 00:02:48.390 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.390 CC module/bdev/malloc/bdev_malloc.o 00:02:48.390 LIB libspdk_sock_posix.a 00:02:48.390 SYMLINK libspdk_accel_iaa.so 00:02:48.390 SO libspdk_sock_posix.so.6.0 00:02:48.390 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.390 CC module/bdev/null/bdev_null.o 00:02:48.390 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.390 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:48.647 SYMLINK libspdk_sock_posix.so 00:02:48.647 LIB libspdk_bdev_error.a 00:02:48.647 SO libspdk_bdev_error.so.6.0 00:02:48.647 LIB libspdk_blobfs_bdev.a 00:02:48.647 LIB libspdk_bdev_delay.a 00:02:48.647 CC module/bdev/nvme/bdev_nvme.o 00:02:48.647 LIB libspdk_bdev_gpt.a 00:02:48.647 SYMLINK libspdk_bdev_error.so 00:02:48.647 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:48.647 SO libspdk_blobfs_bdev.so.6.0 00:02:48.647 SO libspdk_bdev_delay.so.6.0 00:02:48.905 CC module/bdev/null/bdev_null_rpc.o 00:02:48.905 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.905 SO libspdk_bdev_gpt.so.6.0 00:02:48.905 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.905 SYMLINK libspdk_blobfs_bdev.so 00:02:48.905 SYMLINK libspdk_bdev_delay.so 00:02:48.905 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:48.905 SYMLINK libspdk_bdev_gpt.so 00:02:48.905 CC module/bdev/nvme/nvme_rpc.o 00:02:48.905 CC module/bdev/nvme/bdev_mdns_client.o 00:02:48.905 CC module/bdev/raid/bdev_raid.o 00:02:48.905 LIB libspdk_bdev_malloc.a 00:02:48.905 LIB libspdk_bdev_null.a 00:02:48.905 SO libspdk_bdev_malloc.so.6.0 00:02:48.905 SO libspdk_bdev_null.so.6.0 00:02:49.162 CC module/bdev/split/vbdev_split.o 00:02:49.162 SYMLINK libspdk_bdev_null.so 00:02:49.162 SYMLINK libspdk_bdev_malloc.so 00:02:49.162 CC module/bdev/nvme/vbdev_opal.o 00:02:49.162 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.162 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:49.162 CC module/bdev/split/vbdev_split_rpc.o 00:02:49.162 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.162 LIB libspdk_bdev_lvol.a 00:02:49.162 SO libspdk_bdev_lvol.so.6.0 00:02:49.420 LIB libspdk_bdev_passthru.a 00:02:49.420 CC module/bdev/raid/bdev_raid_rpc.o 00:02:49.420 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.420 LIB libspdk_bdev_split.a 00:02:49.420 CC module/bdev/raid/raid0.o 00:02:49.420 SO libspdk_bdev_passthru.so.6.0 00:02:49.420 SYMLINK libspdk_bdev_lvol.so 00:02:49.420 SO libspdk_bdev_split.so.6.0 00:02:49.420 SYMLINK libspdk_bdev_passthru.so 00:02:49.420 SYMLINK libspdk_bdev_split.so 00:02:49.420 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:49.678 CC module/bdev/aio/bdev_aio.o 00:02:49.678 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.678 CC module/bdev/ftl/bdev_ftl.o 00:02:49.678 CC module/bdev/raid/raid1.o 00:02:49.678 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.678 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.678 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.678 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.678 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:49.936 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.936 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.936 LIB libspdk_bdev_zone_block.a 00:02:49.936 LIB libspdk_bdev_aio.a 00:02:49.936 CC module/bdev/raid/concat.o 00:02:49.936 SO libspdk_bdev_aio.so.6.0 00:02:49.936 SO libspdk_bdev_zone_block.so.6.0 00:02:49.936 SYMLINK libspdk_bdev_aio.so 00:02:49.936 SYMLINK libspdk_bdev_zone_block.so 00:02:49.936 LIB libspdk_bdev_iscsi.a 00:02:50.193 SO libspdk_bdev_iscsi.so.6.0 00:02:50.193 LIB libspdk_bdev_ftl.a 00:02:50.193 SO libspdk_bdev_ftl.so.6.0 00:02:50.193 SYMLINK libspdk_bdev_iscsi.so 00:02:50.193 LIB libspdk_bdev_raid.a 00:02:50.193 LIB libspdk_bdev_virtio.a 00:02:50.193 SO libspdk_bdev_virtio.so.6.0 00:02:50.193 SYMLINK libspdk_bdev_ftl.so 00:02:50.193 SO libspdk_bdev_raid.so.6.0 00:02:50.193 SYMLINK libspdk_bdev_raid.so 00:02:50.193 SYMLINK libspdk_bdev_virtio.so 00:02:51.125 LIB libspdk_bdev_nvme.a 
00:02:51.125 SO libspdk_bdev_nvme.so.7.0 00:02:51.383 SYMLINK libspdk_bdev_nvme.so 00:02:51.949 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.949 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.949 CC module/event/subsystems/sock/sock.o 00:02:51.949 CC module/event/subsystems/keyring/keyring.o 00:02:51.949 CC module/event/subsystems/vmd/vmd.o 00:02:51.949 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.949 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.949 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.949 LIB libspdk_event_keyring.a 00:02:51.949 LIB libspdk_event_vhost_blk.a 00:02:51.949 LIB libspdk_event_scheduler.a 00:02:51.949 LIB libspdk_event_sock.a 00:02:51.949 LIB libspdk_event_iobuf.a 00:02:51.949 LIB libspdk_event_vmd.a 00:02:51.949 SO libspdk_event_keyring.so.1.0 00:02:51.949 SO libspdk_event_vhost_blk.so.3.0 00:02:51.949 SO libspdk_event_sock.so.5.0 00:02:51.949 SO libspdk_event_scheduler.so.4.0 00:02:51.949 SO libspdk_event_iobuf.so.3.0 00:02:51.949 SO libspdk_event_vmd.so.6.0 00:02:52.206 SYMLINK libspdk_event_keyring.so 00:02:52.206 SYMLINK libspdk_event_vhost_blk.so 00:02:52.206 SYMLINK libspdk_event_sock.so 00:02:52.206 SYMLINK libspdk_event_scheduler.so 00:02:52.206 SYMLINK libspdk_event_iobuf.so 00:02:52.206 SYMLINK libspdk_event_vmd.so 00:02:52.463 CC module/event/subsystems/accel/accel.o 00:02:52.463 LIB libspdk_event_accel.a 00:02:52.720 SO libspdk_event_accel.so.6.0 00:02:52.720 SYMLINK libspdk_event_accel.so 00:02:52.977 CC module/event/subsystems/bdev/bdev.o 00:02:53.234 LIB libspdk_event_bdev.a 00:02:53.234 SO libspdk_event_bdev.so.6.0 00:02:53.235 SYMLINK libspdk_event_bdev.so 00:02:53.492 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.492 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.492 CC module/event/subsystems/nbd/nbd.o 00:02:53.492 CC module/event/subsystems/scsi/scsi.o 00:02:53.492 CC module/event/subsystems/ublk/ublk.o 00:02:53.751 LIB libspdk_event_scsi.a 00:02:53.751 LIB libspdk_event_ublk.a 00:02:53.751 LIB libspdk_event_nbd.a 00:02:53.751 SO libspdk_event_scsi.so.6.0 00:02:53.751 SO libspdk_event_ublk.so.3.0 00:02:53.751 SO libspdk_event_nbd.so.6.0 00:02:53.751 LIB libspdk_event_nvmf.a 00:02:53.751 SYMLINK libspdk_event_scsi.so 00:02:53.751 SYMLINK libspdk_event_nbd.so 00:02:53.751 SYMLINK libspdk_event_ublk.so 00:02:53.751 SO libspdk_event_nvmf.so.6.0 00:02:54.009 SYMLINK libspdk_event_nvmf.so 00:02:54.009 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:54.009 CC module/event/subsystems/iscsi/iscsi.o 00:02:54.266 LIB libspdk_event_vhost_scsi.a 00:02:54.266 SO libspdk_event_vhost_scsi.so.3.0 00:02:54.266 LIB libspdk_event_iscsi.a 00:02:54.266 SO libspdk_event_iscsi.so.6.0 00:02:54.523 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.523 SYMLINK libspdk_event_iscsi.so 00:02:54.523 SO libspdk.so.6.0 00:02:54.523 SYMLINK libspdk.so 00:02:54.780 CC test/rpc_client/rpc_client_test.o 00:02:54.780 TEST_HEADER include/spdk/accel.h 00:02:54.780 CXX app/trace/trace.o 00:02:54.780 TEST_HEADER include/spdk/accel_module.h 00:02:54.780 TEST_HEADER include/spdk/assert.h 00:02:54.780 TEST_HEADER include/spdk/barrier.h 00:02:54.780 TEST_HEADER include/spdk/base64.h 00:02:54.780 TEST_HEADER include/spdk/bdev.h 00:02:54.780 TEST_HEADER include/spdk/bdev_module.h 00:02:54.780 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.780 TEST_HEADER include/spdk/bit_array.h 00:02:54.780 TEST_HEADER include/spdk/bit_pool.h 00:02:54.780 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.780 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.780 
TEST_HEADER include/spdk/blobfs.h 00:02:54.780 TEST_HEADER include/spdk/blob.h 00:02:54.780 TEST_HEADER include/spdk/conf.h 00:02:54.780 TEST_HEADER include/spdk/config.h 00:02:54.780 TEST_HEADER include/spdk/cpuset.h 00:02:55.039 TEST_HEADER include/spdk/crc16.h 00:02:55.039 TEST_HEADER include/spdk/crc32.h 00:02:55.039 TEST_HEADER include/spdk/crc64.h 00:02:55.039 TEST_HEADER include/spdk/dif.h 00:02:55.039 TEST_HEADER include/spdk/dma.h 00:02:55.039 TEST_HEADER include/spdk/endian.h 00:02:55.039 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.039 TEST_HEADER include/spdk/env.h 00:02:55.039 TEST_HEADER include/spdk/event.h 00:02:55.039 TEST_HEADER include/spdk/fd_group.h 00:02:55.039 TEST_HEADER include/spdk/fd.h 00:02:55.039 TEST_HEADER include/spdk/file.h 00:02:55.039 TEST_HEADER include/spdk/ftl.h 00:02:55.039 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.039 TEST_HEADER include/spdk/hexlify.h 00:02:55.039 TEST_HEADER include/spdk/histogram_data.h 00:02:55.039 TEST_HEADER include/spdk/idxd.h 00:02:55.039 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.039 TEST_HEADER include/spdk/init.h 00:02:55.039 TEST_HEADER include/spdk/ioat.h 00:02:55.039 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.039 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.039 TEST_HEADER include/spdk/json.h 00:02:55.039 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.039 CC test/thread/poller_perf/poller_perf.o 00:02:55.039 TEST_HEADER include/spdk/keyring.h 00:02:55.039 CC examples/ioat/perf/perf.o 00:02:55.039 TEST_HEADER include/spdk/keyring_module.h 00:02:55.039 TEST_HEADER include/spdk/likely.h 00:02:55.039 TEST_HEADER include/spdk/log.h 00:02:55.039 CC examples/util/zipf/zipf.o 00:02:55.039 TEST_HEADER include/spdk/lvol.h 00:02:55.039 TEST_HEADER include/spdk/memory.h 00:02:55.039 TEST_HEADER include/spdk/mmio.h 00:02:55.039 TEST_HEADER include/spdk/nbd.h 00:02:55.039 TEST_HEADER include/spdk/net.h 00:02:55.039 TEST_HEADER include/spdk/notify.h 00:02:55.039 TEST_HEADER include/spdk/nvme.h 00:02:55.039 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.039 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.039 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.039 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.039 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.039 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.039 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.039 TEST_HEADER include/spdk/nvmf.h 00:02:55.039 CC test/dma/test_dma/test_dma.o 00:02:55.039 CC test/app/bdev_svc/bdev_svc.o 00:02:55.039 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.039 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.039 TEST_HEADER include/spdk/opal.h 00:02:55.039 TEST_HEADER include/spdk/opal_spec.h 00:02:55.039 TEST_HEADER include/spdk/pci_ids.h 00:02:55.039 TEST_HEADER include/spdk/pipe.h 00:02:55.039 TEST_HEADER include/spdk/queue.h 00:02:55.039 TEST_HEADER include/spdk/reduce.h 00:02:55.039 TEST_HEADER include/spdk/rpc.h 00:02:55.039 TEST_HEADER include/spdk/scheduler.h 00:02:55.039 TEST_HEADER include/spdk/scsi.h 00:02:55.039 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.039 TEST_HEADER include/spdk/sock.h 00:02:55.040 TEST_HEADER include/spdk/stdinc.h 00:02:55.040 TEST_HEADER include/spdk/string.h 00:02:55.040 TEST_HEADER include/spdk/thread.h 00:02:55.040 TEST_HEADER include/spdk/trace.h 00:02:55.040 TEST_HEADER include/spdk/trace_parser.h 00:02:55.040 TEST_HEADER include/spdk/tree.h 00:02:55.040 LINK rpc_client_test 00:02:55.040 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.040 TEST_HEADER include/spdk/ublk.h 00:02:55.040 TEST_HEADER 
include/spdk/util.h 00:02:55.040 TEST_HEADER include/spdk/uuid.h 00:02:55.040 TEST_HEADER include/spdk/version.h 00:02:55.040 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.297 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.297 TEST_HEADER include/spdk/vhost.h 00:02:55.297 TEST_HEADER include/spdk/vmd.h 00:02:55.297 LINK zipf 00:02:55.297 TEST_HEADER include/spdk/xor.h 00:02:55.297 TEST_HEADER include/spdk/zipf.h 00:02:55.297 CXX test/cpp_headers/accel.o 00:02:55.297 LINK poller_perf 00:02:55.297 LINK bdev_svc 00:02:55.297 LINK ioat_perf 00:02:55.297 CXX test/cpp_headers/accel_module.o 00:02:55.297 LINK spdk_trace 00:02:55.563 CC examples/ioat/verify/verify.o 00:02:55.563 LINK test_dma 00:02:55.563 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:55.563 CXX test/cpp_headers/assert.o 00:02:55.563 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.563 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.563 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:55.821 CC app/trace_record/trace_record.o 00:02:55.821 LINK verify 00:02:55.821 CXX test/cpp_headers/barrier.o 00:02:55.821 LINK interrupt_tgt 00:02:55.821 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.821 LINK mem_callbacks 00:02:55.821 CXX test/cpp_headers/base64.o 00:02:55.821 CC test/event/event_perf/event_perf.o 00:02:56.078 CC test/event/reactor/reactor.o 00:02:56.078 LINK spdk_trace_record 00:02:56.078 LINK nvme_fuzz 00:02:56.078 CC test/env/vtophys/vtophys.o 00:02:56.336 LINK event_perf 00:02:56.336 CXX test/cpp_headers/bdev.o 00:02:56.336 LINK reactor 00:02:56.336 CC test/nvme/aer/aer.o 00:02:56.336 LINK vhost_fuzz 00:02:56.336 LINK vtophys 00:02:56.336 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:56.336 CC app/nvmf_tgt/nvmf_main.o 00:02:56.336 CXX test/cpp_headers/bdev_module.o 00:02:56.593 CC test/event/reactor_perf/reactor_perf.o 00:02:56.593 CC test/event/app_repeat/app_repeat.o 00:02:56.593 LINK aer 00:02:56.593 LINK env_dpdk_post_init 00:02:56.593 CC test/event/scheduler/scheduler.o 00:02:56.593 CC test/app/histogram_perf/histogram_perf.o 00:02:56.593 LINK nvmf_tgt 00:02:56.593 LINK reactor_perf 00:02:56.593 CXX test/cpp_headers/bdev_zone.o 00:02:56.851 LINK app_repeat 00:02:56.851 LINK histogram_perf 00:02:56.851 CC test/nvme/reset/reset.o 00:02:56.851 LINK scheduler 00:02:56.852 CC test/env/memory/memory_ut.o 00:02:56.852 CXX test/cpp_headers/bit_array.o 00:02:56.852 CC test/env/pci/pci_ut.o 00:02:57.110 CC app/spdk_lspci/spdk_lspci.o 00:02:57.110 CC app/iscsi_tgt/iscsi_tgt.o 00:02:57.110 CXX test/cpp_headers/bit_pool.o 00:02:57.110 CXX test/cpp_headers/blob_bdev.o 00:02:57.110 CC app/spdk_tgt/spdk_tgt.o 00:02:57.110 LINK reset 00:02:57.110 LINK spdk_lspci 00:02:57.368 LINK iscsi_tgt 00:02:57.368 CXX test/cpp_headers/blobfs_bdev.o 00:02:57.368 LINK iscsi_fuzz 00:02:57.368 LINK spdk_tgt 00:02:57.368 LINK pci_ut 00:02:57.368 CC test/nvme/sgl/sgl.o 00:02:57.626 CC test/nvme/e2edp/nvme_dp.o 00:02:57.626 CC test/accel/dif/dif.o 00:02:57.626 CXX test/cpp_headers/blobfs.o 00:02:57.884 CC test/nvme/overhead/overhead.o 00:02:57.884 CC test/app/jsoncat/jsoncat.o 00:02:57.884 LINK sgl 00:02:57.884 CXX test/cpp_headers/blob.o 00:02:57.884 CC app/spdk_nvme_perf/perf.o 00:02:57.884 CC test/app/stub/stub.o 00:02:57.884 LINK nvme_dp 00:02:58.142 LINK jsoncat 00:02:58.142 LINK dif 00:02:58.142 CXX test/cpp_headers/conf.o 00:02:58.142 LINK overhead 00:02:58.142 LINK stub 00:02:58.142 LINK memory_ut 00:02:58.399 CXX test/cpp_headers/config.o 00:02:58.399 CXX test/cpp_headers/cpuset.o 00:02:58.399 CC test/blobfs/mkfs/mkfs.o 
00:02:58.399 CC test/nvme/err_injection/err_injection.o 00:02:58.399 CC test/nvme/startup/startup.o 00:02:58.399 CC test/nvme/reserve/reserve.o 00:02:58.399 CC test/nvme/simple_copy/simple_copy.o 00:02:58.399 CC test/lvol/esnap/esnap.o 00:02:58.656 CC test/nvme/connect_stress/connect_stress.o 00:02:58.656 CXX test/cpp_headers/crc16.o 00:02:58.656 LINK startup 00:02:58.656 LINK mkfs 00:02:58.656 LINK err_injection 00:02:58.656 LINK reserve 00:02:58.656 LINK spdk_nvme_perf 00:02:58.656 LINK simple_copy 00:02:58.656 CXX test/cpp_headers/crc32.o 00:02:58.914 LINK connect_stress 00:02:58.914 CXX test/cpp_headers/crc64.o 00:02:59.172 CC examples/sock/hello_world/hello_sock.o 00:02:59.172 CC app/spdk_nvme_identify/identify.o 00:02:59.172 CC examples/thread/thread/thread_ex.o 00:02:59.172 CC test/bdev/bdevio/bdevio.o 00:02:59.172 CC test/nvme/boot_partition/boot_partition.o 00:02:59.430 CXX test/cpp_headers/dif.o 00:02:59.430 CC examples/vmd/lsvmd/lsvmd.o 00:02:59.430 CC examples/vmd/led/led.o 00:02:59.430 LINK led 00:02:59.430 CXX test/cpp_headers/dma.o 00:02:59.430 LINK hello_sock 00:02:59.430 LINK lsvmd 00:02:59.430 LINK boot_partition 00:02:59.688 LINK thread 00:02:59.688 LINK bdevio 00:02:59.688 CXX test/cpp_headers/endian.o 00:02:59.688 CXX test/cpp_headers/env_dpdk.o 00:02:59.945 CC test/nvme/compliance/nvme_compliance.o 00:02:59.945 CC test/nvme/fused_ordering/fused_ordering.o 00:02:59.945 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:59.945 CXX test/cpp_headers/env.o 00:03:00.202 CC examples/idxd/perf/perf.o 00:03:00.202 CC test/nvme/fdp/fdp.o 00:03:00.202 LINK spdk_nvme_identify 00:03:00.202 CC test/nvme/cuse/cuse.o 00:03:00.202 CXX test/cpp_headers/event.o 00:03:00.202 LINK nvme_compliance 00:03:00.202 LINK fused_ordering 00:03:00.202 LINK doorbell_aers 00:03:00.459 CXX test/cpp_headers/fd_group.o 00:03:00.459 LINK idxd_perf 00:03:00.459 LINK fdp 00:03:00.459 CC app/spdk_nvme_discover/discovery_aer.o 00:03:00.715 CC app/spdk_top/spdk_top.o 00:03:00.715 CXX test/cpp_headers/fd.o 00:03:00.715 CC app/spdk_dd/spdk_dd.o 00:03:00.715 CC app/vhost/vhost.o 00:03:00.715 LINK spdk_nvme_discover 00:03:00.715 CXX test/cpp_headers/file.o 00:03:00.715 CC examples/nvme/hello_world/hello_world.o 00:03:00.984 CC app/fio/nvme/fio_plugin.o 00:03:00.984 LINK vhost 00:03:00.984 CXX test/cpp_headers/ftl.o 00:03:00.984 LINK spdk_dd 00:03:00.984 LINK hello_world 00:03:01.269 CXX test/cpp_headers/gpt_spec.o 00:03:01.269 CC app/fio/bdev/fio_plugin.o 00:03:01.269 CXX test/cpp_headers/hexlify.o 00:03:01.269 CXX test/cpp_headers/histogram_data.o 00:03:01.269 CXX test/cpp_headers/idxd.o 00:03:01.269 CXX test/cpp_headers/idxd_spec.o 00:03:01.269 CC examples/nvme/reconnect/reconnect.o 00:03:01.539 LINK spdk_top 00:03:01.539 LINK spdk_nvme 00:03:01.539 CXX test/cpp_headers/init.o 00:03:01.539 CXX test/cpp_headers/ioat.o 00:03:01.539 CXX test/cpp_headers/ioat_spec.o 00:03:01.539 LINK cuse 00:03:01.797 LINK spdk_bdev 00:03:01.797 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:01.797 CXX test/cpp_headers/iscsi_spec.o 00:03:01.797 LINK reconnect 00:03:01.797 CC examples/nvme/arbitration/arbitration.o 00:03:01.797 CC examples/accel/perf/accel_perf.o 00:03:02.054 CXX test/cpp_headers/json.o 00:03:02.054 CC examples/blob/hello_world/hello_blob.o 00:03:02.054 CC examples/nvme/hotplug/hotplug.o 00:03:02.054 CC examples/blob/cli/blobcli.o 00:03:02.054 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:02.054 CXX test/cpp_headers/jsonrpc.o 00:03:02.312 LINK arbitration 00:03:02.312 LINK hotplug 00:03:02.312 LINK hello_blob 
00:03:02.312 LINK nvme_manage 00:03:02.312 LINK cmb_copy 00:03:02.312 LINK accel_perf 00:03:02.312 CXX test/cpp_headers/keyring.o 00:03:02.312 CXX test/cpp_headers/keyring_module.o 00:03:02.312 CXX test/cpp_headers/likely.o 00:03:02.312 CC examples/nvme/abort/abort.o 00:03:02.570 CXX test/cpp_headers/log.o 00:03:02.570 CXX test/cpp_headers/lvol.o 00:03:02.570 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:02.570 LINK blobcli 00:03:02.570 CXX test/cpp_headers/memory.o 00:03:02.570 CXX test/cpp_headers/mmio.o 00:03:02.570 CXX test/cpp_headers/nbd.o 00:03:02.570 CXX test/cpp_headers/net.o 00:03:02.570 CXX test/cpp_headers/notify.o 00:03:02.570 CXX test/cpp_headers/nvme.o 00:03:02.570 CXX test/cpp_headers/nvme_intel.o 00:03:02.829 LINK pmr_persistence 00:03:02.829 CXX test/cpp_headers/nvme_ocssd.o 00:03:02.829 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:02.829 CXX test/cpp_headers/nvme_spec.o 00:03:02.829 LINK abort 00:03:02.829 CXX test/cpp_headers/nvme_zns.o 00:03:02.829 CXX test/cpp_headers/nvmf_cmd.o 00:03:02.829 CC examples/bdev/hello_world/hello_bdev.o 00:03:02.829 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:02.829 CXX test/cpp_headers/nvmf.o 00:03:02.829 CXX test/cpp_headers/nvmf_spec.o 00:03:02.829 CXX test/cpp_headers/nvmf_transport.o 00:03:02.829 CXX test/cpp_headers/opal.o 00:03:03.087 CXX test/cpp_headers/opal_spec.o 00:03:03.087 CXX test/cpp_headers/pci_ids.o 00:03:03.087 CXX test/cpp_headers/pipe.o 00:03:03.087 CXX test/cpp_headers/queue.o 00:03:03.087 LINK hello_bdev 00:03:03.087 CXX test/cpp_headers/reduce.o 00:03:03.087 CXX test/cpp_headers/rpc.o 00:03:03.087 CXX test/cpp_headers/scheduler.o 00:03:03.087 CXX test/cpp_headers/scsi.o 00:03:03.087 CXX test/cpp_headers/scsi_spec.o 00:03:03.087 CXX test/cpp_headers/sock.o 00:03:03.087 CXX test/cpp_headers/stdinc.o 00:03:03.345 CXX test/cpp_headers/string.o 00:03:03.345 CC examples/bdev/bdevperf/bdevperf.o 00:03:03.345 CXX test/cpp_headers/thread.o 00:03:03.345 CXX test/cpp_headers/trace.o 00:03:03.345 CXX test/cpp_headers/trace_parser.o 00:03:03.345 CXX test/cpp_headers/tree.o 00:03:03.345 CXX test/cpp_headers/ublk.o 00:03:03.345 CXX test/cpp_headers/util.o 00:03:03.345 CXX test/cpp_headers/uuid.o 00:03:03.345 CXX test/cpp_headers/version.o 00:03:03.345 CXX test/cpp_headers/vfio_user_pci.o 00:03:03.345 CXX test/cpp_headers/vfio_user_spec.o 00:03:03.345 CXX test/cpp_headers/vhost.o 00:03:03.345 CXX test/cpp_headers/vmd.o 00:03:03.345 CXX test/cpp_headers/xor.o 00:03:03.603 CXX test/cpp_headers/zipf.o 00:03:03.862 LINK esnap 00:03:04.231 LINK bdevperf 00:03:04.496 CC examples/nvmf/nvmf/nvmf.o 00:03:05.062 LINK nvmf 00:03:05.319 00:03:05.319 real 1m9.195s 00:03:05.319 user 7m6.843s 00:03:05.319 sys 1m45.690s 00:03:05.319 16:18:23 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:05.319 16:18:23 make -- common/autotest_common.sh@10 -- $ set +x 00:03:05.319 ************************************ 00:03:05.319 END TEST make 00:03:05.319 ************************************ 00:03:05.319 16:18:23 -- common/autotest_common.sh@1142 -- $ return 0 00:03:05.319 16:18:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:05.319 16:18:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:05.319 16:18:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:05.319 16:18:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.319 16:18:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:05.319 16:18:23 -- pm/common@44 -- $ pid=5310 
00:03:05.319 16:18:23 -- pm/common@50 -- $ kill -TERM 5310 00:03:05.319 16:18:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.319 16:18:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:05.319 16:18:23 -- pm/common@44 -- $ pid=5312 00:03:05.319 16:18:23 -- pm/common@50 -- $ kill -TERM 5312 00:03:05.319 16:18:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:05.319 16:18:23 -- nvmf/common.sh@7 -- # uname -s 00:03:05.319 16:18:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.319 16:18:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.319 16:18:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.319 16:18:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.319 16:18:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.319 16:18:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.319 16:18:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.319 16:18:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.319 16:18:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.319 16:18:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.319 16:18:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:03:05.319 16:18:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:03:05.319 16:18:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.319 16:18:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.319 16:18:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:05.319 16:18:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.319 16:18:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:05.319 16:18:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.319 16:18:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.319 16:18:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.319 16:18:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.319 16:18:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.319 16:18:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.319 16:18:23 -- paths/export.sh@5 -- # export PATH 00:03:05.319 16:18:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.319 16:18:23 -- nvmf/common.sh@47 -- # : 0 
00:03:05.319 16:18:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:05.319 16:18:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:05.319 16:18:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.319 16:18:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.319 16:18:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.319 16:18:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:05.319 16:18:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:05.319 16:18:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:05.320 16:18:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.320 16:18:23 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.320 16:18:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.320 16:18:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.320 16:18:23 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:05.320 16:18:23 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.320 16:18:23 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:05.320 16:18:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.577 16:18:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.577 16:18:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.577 16:18:23 -- spdk/autotest.sh@48 -- # udevadm_pid=54712 00:03:05.577 16:18:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.577 16:18:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.577 16:18:23 -- pm/common@17 -- # local monitor 00:03:05.577 16:18:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.577 16:18:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.577 16:18:23 -- pm/common@25 -- # sleep 1 00:03:05.577 16:18:23 -- pm/common@21 -- # date +%s 00:03:05.577 16:18:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721578703 00:03:05.577 16:18:23 -- pm/common@21 -- # date +%s 00:03:05.577 16:18:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721578703 00:03:05.577 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721578703_collect-vmstat.pm.log 00:03:05.577 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721578703_collect-cpu-load.pm.log 00:03:06.511 16:18:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.511 16:18:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.511 16:18:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:06.511 16:18:24 -- common/autotest_common.sh@10 -- # set +x 00:03:06.511 16:18:24 -- spdk/autotest.sh@59 -- # create_test_list 00:03:06.511 16:18:24 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:06.511 16:18:24 -- common/autotest_common.sh@10 -- # set +x 00:03:06.511 16:18:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:06.511 16:18:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:06.511 16:18:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:06.511 16:18:24 -- spdk/autotest.sh@62 -- # 
out=/home/vagrant/spdk_repo/spdk/../output 00:03:06.511 16:18:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:06.511 16:18:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:06.511 16:18:24 -- common/autotest_common.sh@1455 -- # uname 00:03:06.511 16:18:24 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:06.511 16:18:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:06.511 16:18:24 -- common/autotest_common.sh@1475 -- # uname 00:03:06.511 16:18:24 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:06.511 16:18:24 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:06.511 16:18:24 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:06.511 16:18:24 -- spdk/autotest.sh@72 -- # hash lcov 00:03:06.511 16:18:24 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:06.511 16:18:24 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:06.511 --rc lcov_branch_coverage=1 00:03:06.511 --rc lcov_function_coverage=1 00:03:06.511 --rc genhtml_branch_coverage=1 00:03:06.511 --rc genhtml_function_coverage=1 00:03:06.511 --rc genhtml_legend=1 00:03:06.511 --rc geninfo_all_blocks=1 00:03:06.511 ' 00:03:06.511 16:18:24 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:06.511 --rc lcov_branch_coverage=1 00:03:06.511 --rc lcov_function_coverage=1 00:03:06.511 --rc genhtml_branch_coverage=1 00:03:06.511 --rc genhtml_function_coverage=1 00:03:06.511 --rc genhtml_legend=1 00:03:06.511 --rc geninfo_all_blocks=1 00:03:06.511 ' 00:03:06.511 16:18:24 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:06.511 --rc lcov_branch_coverage=1 00:03:06.511 --rc lcov_function_coverage=1 00:03:06.511 --rc genhtml_branch_coverage=1 00:03:06.511 --rc genhtml_function_coverage=1 00:03:06.511 --rc genhtml_legend=1 00:03:06.511 --rc geninfo_all_blocks=1 00:03:06.511 --no-external' 00:03:06.511 16:18:24 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:06.511 --rc lcov_branch_coverage=1 00:03:06.511 --rc lcov_function_coverage=1 00:03:06.511 --rc genhtml_branch_coverage=1 00:03:06.511 --rc genhtml_function_coverage=1 00:03:06.511 --rc genhtml_legend=1 00:03:06.511 --rc geninfo_all_blocks=1 00:03:06.511 --no-external' 00:03:06.511 16:18:24 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:06.770 lcov: LCOV version 1.14 00:03:06.770 16:18:24 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:21.636 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:21.636 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:33.854 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:33.854 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 
00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:33.855 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions 
found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:33.855 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:33.855 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:36.494 16:18:54 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:36.494 16:18:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:36.494 16:18:54 -- common/autotest_common.sh@10 -- # set +x 00:03:36.494 16:18:54 -- spdk/autotest.sh@91 -- # rm -f 00:03:36.494 16:18:54 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:37.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:37.061 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:37.061 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:37.061 16:18:55 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:37.061 16:18:55 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:37.061 16:18:55 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:37.061 16:18:55 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:37.061 16:18:55 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.061 16:18:55 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:37.061 16:18:55 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:37.061 16:18:55 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:37.061 16:18:55 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.061 16:18:55 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.061 16:18:55 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:37.061 16:18:55 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:37.061 16:18:55 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:37.061 16:18:55 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.061 16:18:55 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.061 16:18:55 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:37.061 16:18:55 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:37.061 16:18:55 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:37.061 16:18:55 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.061 16:18:55 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:37.061 16:18:55 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:37.061 16:18:55 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:37.061 16:18:55 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:37.061 16:18:55 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:37.061 16:18:55 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:37.061 16:18:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.061 16:18:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.061 16:18:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:37.061 16:18:55 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:37.061 16:18:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:37.061 No valid GPT data, bailing 00:03:37.061 16:18:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:37.061 16:18:55 -- scripts/common.sh@391 -- # pt= 00:03:37.061 16:18:55 -- scripts/common.sh@392 -- # return 1 00:03:37.061 16:18:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:37.061 1+0 records in 00:03:37.061 1+0 records out 00:03:37.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004059 s, 258 MB/s 00:03:37.061 16:18:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.061 16:18:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.061 16:18:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:37.061 16:18:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:37.061 16:18:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:37.319 No valid GPT data, bailing 00:03:37.319 16:18:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:37.319 16:18:55 -- scripts/common.sh@391 -- # pt= 00:03:37.319 16:18:55 -- scripts/common.sh@392 -- # return 1 00:03:37.319 16:18:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:37.319 1+0 records in 00:03:37.319 1+0 records out 00:03:37.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491614 s, 213 MB/s 00:03:37.319 16:18:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.319 16:18:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.319 16:18:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:37.319 16:18:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:37.319 16:18:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:37.319 No valid GPT data, bailing 00:03:37.319 16:18:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:37.319 16:18:55 -- scripts/common.sh@391 -- # pt= 00:03:37.319 16:18:55 -- scripts/common.sh@392 -- # return 1 00:03:37.319 16:18:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:37.319 1+0 records in 00:03:37.319 1+0 records out 00:03:37.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474109 s, 221 MB/s 00:03:37.319 16:18:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:37.319 16:18:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:37.319 16:18:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:37.319 16:18:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:37.319 16:18:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:37.319 No valid GPT data, bailing 00:03:37.319 16:18:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:37.319 16:18:55 -- scripts/common.sh@391 -- # pt= 00:03:37.319 16:18:55 -- scripts/common.sh@392 -- # return 1 00:03:37.319 16:18:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:03:37.319 1+0 records in 00:03:37.319 1+0 records out 00:03:37.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464161 s, 226 MB/s 00:03:37.319 16:18:55 -- spdk/autotest.sh@118 -- # sync 00:03:37.577 16:18:55 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:37.577 16:18:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:37.577 16:18:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:39.476 16:18:57 -- spdk/autotest.sh@124 -- # uname -s 00:03:39.476 16:18:57 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:39.476 16:18:57 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:39.476 16:18:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.476 16:18:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.476 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:03:39.476 ************************************ 00:03:39.476 START TEST setup.sh 00:03:39.476 ************************************ 00:03:39.476 16:18:57 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:39.476 * Looking for test storage... 00:03:39.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:39.476 16:18:57 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:39.476 16:18:57 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:39.476 16:18:57 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:39.476 16:18:57 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:39.476 16:18:57 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.476 16:18:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.476 ************************************ 00:03:39.476 START TEST acl 00:03:39.476 ************************************ 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:39.476 * Looking for test storage... 
00:03:39.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:39.476 16:18:57 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:39.476 16:18:57 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:39.476 16:18:57 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:39.476 16:18:57 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:39.476 16:18:57 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:39.476 16:18:57 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:39.476 16:18:57 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:39.476 16:18:57 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:39.476 16:18:57 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.429 16:18:58 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:40.429 16:18:58 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:40.430 16:18:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.430 16:18:58 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:40.430 16:18:58 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.430 16:18:58 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:40.995 16:18:59 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.995 Hugepages 00:03:40.995 node hugesize free / total 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.995 00:03:40.995 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:40.995 16:18:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.253 16:18:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:41.253 16:18:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:41.253 16:18:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:41.253 16:18:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:41.253 16:18:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:41.253 16:18:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:41.253 16:18:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:41.253 16:18:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:41.253 16:18:59 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.253 16:18:59 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.253 16:18:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.253 ************************************ 00:03:41.253 START TEST denied 00:03:41.253 ************************************ 00:03:41.253 16:18:59 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:41.253 16:18:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:41.253 16:18:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:41.253 16:18:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.253 16:18:59 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:41.253 16:18:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:42.187 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.187 16:19:00 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.752 00:03:42.752 real 0m1.439s 00:03:42.752 user 0m0.619s 00:03:42.752 sys 0m0.777s 00:03:42.752 16:19:00 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.752 16:19:00 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:42.752 ************************************ 00:03:42.752 END TEST denied 00:03:42.752 ************************************ 00:03:42.752 16:19:00 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:42.752 16:19:00 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:42.752 16:19:00 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.752 16:19:00 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.752 16:19:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.752 ************************************ 00:03:42.752 START TEST allowed 00:03:42.752 ************************************ 00:03:42.752 16:19:00 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:42.752 16:19:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:42.752 16:19:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:42.752 16:19:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.752 16:19:00 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:42.752 16:19:00 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:43.685 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.685 16:19:01 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:44.264 00:03:44.265 real 0m1.522s 00:03:44.265 user 0m0.679s 00:03:44.265 sys 0m0.830s 00:03:44.265 16:19:02 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:44.265 ************************************ 00:03:44.265 END TEST allowed 00:03:44.265 ************************************ 00:03:44.265 16:19:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:44.265 16:19:02 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:44.265 00:03:44.265 real 0m4.754s 00:03:44.265 user 0m2.133s 00:03:44.265 sys 0m2.568s 00:03:44.265 16:19:02 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.265 ************************************ 00:03:44.265 END TEST acl 00:03:44.265 ************************************ 00:03:44.265 16:19:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:44.265 16:19:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:44.265 16:19:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:44.265 16:19:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.265 16:19:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.265 16:19:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:44.265 ************************************ 00:03:44.265 START TEST hugepages 00:03:44.265 ************************************ 00:03:44.265 16:19:02 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:44.265 * Looking for test storage... 00:03:44.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 5872780 kB' 'MemAvailable: 7400676 kB' 'Buffers: 2436 kB' 'Cached: 1739232 kB' 'SwapCached: 0 kB' 'Active: 477572 kB' 'Inactive: 1368980 kB' 'Active(anon): 115372 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 106568 kB' 'Mapped: 48568 kB' 'Shmem: 10488 kB' 'KReclaimable: 67312 kB' 'Slab: 140508 kB' 'SReclaimable: 67312 kB' 'SUnreclaim: 73196 kB' 'KernelStack: 6476 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412444 kB' 'Committed_AS: 336340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.265 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.266 16:19:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.266 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:44.523 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.524 16:19:02 
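The loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo field by field: it splits each line on IFS=': ', skips every key that does not match the one requested, and echoes the value once it reaches Hugepagesize, which resolves to 2048 kB on this runner and becomes default_hugepages in setup/hugepages.sh. A minimal sketch of that scan, with an illustrative helper name (get_meminfo_value is not the real SPDK function):

    # Simplified re-implementation of the scan seen in the trace: split each
    # /proc/meminfo line on ': ' and return the value of one key.
    get_meminfo_value() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    # On this runner the same lookup yields 2048, i.e. 2 MiB default hugepages.
    hugepagesize_kb=$(get_meminfo_value Hugepagesize)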
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:44.524 16:19:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:44.524 16:19:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.524 16:19:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.524 16:19:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.524 ************************************ 00:03:44.524 START TEST default_setup 00:03:44.524 ************************************ 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.524 16:19:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.088 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:45.350 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7984092 kB' 'MemAvailable: 9511812 kB' 'Buffers: 2436 kB' 'Cached: 1739220 kB' 'SwapCached: 0 kB' 'Active: 494320 kB' 'Inactive: 1368980 kB' 'Active(anon): 132120 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123272 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140056 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73100 kB' 'KernelStack: 6448 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
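Just before scripts/setup.sh printed the device-binding lines above, the trace shows hugepages.sh zeroing every per-node hugepage count (clear_hp, then CLEAR_HUGE=yes) and get_test_nr_hugepages converting the requested 2097152 kB into 1024 default-size pages for node 0. A rough sketch of those two steps, assuming 2048 kB pages, a sysfs layout like the one in the trace, and root privileges for the writes (variable names are illustrative):

    # Convert the requested pool size into default-size pages (2048 kB assumed).
    size_kb=2097152
    hugepagesize_kb=2048
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024, matching the log

    # clear_hp analogue: reset every per-node count for every supported page size.
    shopt -s nullglob
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # exported before scripts/setup.sh runs, as in the trace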
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.350 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.351 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
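Each get_meminfo call in this trace also picks its data source: with no node argument the [[ -e /sys/devices/system/node/node/meminfo ]] test fails (the path has no node number), so the helper falls back to /proc/meminfo; with a node it would read the per-node file and strip the leading "Node N " prefix with the extglob pattern visible above. A small standalone sketch of that selection, with the node taken from $1 and illustrative variable names:

    # Pick the meminfo source the way the traced helper does: per-node sysfs
    # file if a node was given and exists, otherwise the global /proc/meminfo.
    shopt -s extglob
    node=${1:-}                     # empty in this trace -> global stats
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # no-op for /proc/meminfo lines
    printf '%s\n' "${mem[@]:0:3}"      # e.g. MemTotal / MemFree / MemAvailable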
# mem=("${mem[@]#Node +([0-9]) }") 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7983848 kB' 'MemAvailable: 9511568 kB' 'Buffers: 2436 kB' 'Cached: 1739220 kB' 'SwapCached: 0 kB' 'Active: 494264 kB' 'Inactive: 1368980 kB' 'Active(anon): 132064 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123172 kB' 'Mapped: 48684 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140056 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73100 kB' 'KernelStack: 6384 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.352 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.353 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7984100 kB' 'MemAvailable: 9511820 kB' 'Buffers: 2436 kB' 'Cached: 1739220 kB' 'SwapCached: 0 kB' 'Active: 494188 kB' 'Inactive: 1368980 kB' 'Active(anon): 131988 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368980 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123168 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140052 kB' 'SReclaimable: 66956 kB' 
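At this point verify_nr_hugepages has resolved anon=0 (AnonHugePages) and surp=0 (HugePages_Surp) and is starting the same scan for HugePages_Rsvd; the meminfo snapshot it is reading reports HugePages_Total and HugePages_Free as 1024, the count requested earlier. A sketch of that bookkeeping, reusing the illustrative get_meminfo_value helper from the first sketch (the names and the final echo are assumptions, not the real setup/hugepages.sh code):

    # Collect the counters verify_nr_hugepages reads in this trace.
    anon=$(get_meminfo_value AnonHugePages)     # transparent hugepages, kB (0 here)
    surp=$(get_meminfo_value HugePages_Surp)    # surplus pages (0 here)
    resv=$(get_meminfo_value HugePages_Rsvd)    # reserved pages (0 here)
    total=$(get_meminfo_value HugePages_Total)  # 1024 in the snapshot above
    free=$(get_meminfo_value HugePages_Free)    # 1024 in the snapshot above
    echo "total=$total free=$free surp=$surp resv=$resv anon=${anon} kB"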
'SUnreclaim: 73096 kB' 'KernelStack: 6384 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.354 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.355 16:19:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:45.355 nr_hugepages=1024 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.355 resv_hugepages=0 00:03:45.355 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.355 surplus_hugepages=0 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.356 anon_hugepages=0 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7984352 kB' 'MemAvailable: 9512076 kB' 'Buffers: 2436 kB' 'Cached: 1739220 kB' 'SwapCached: 0 kB' 'Active: 493992 kB' 'Inactive: 1368984 kB' 'Active(anon): 131792 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368984 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122892 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140052 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73096 kB' 'KernelStack: 6352 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.356 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7984352 kB' 'MemUsed: 4257632 kB' 'SwapCached: 0 kB' 'Active: 493772 kB' 'Inactive: 1368992 kB' 'Active(anon): 131572 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368992 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1741656 kB' 'Mapped: 48568 kB' 'AnonPages: 122720 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66956 kB' 'Slab: 140048 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.357 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 
16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.358 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
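The scan that ends here is the generic helper pattern the suite uses for every meminfo lookup: split each /proc/meminfo line on ': ', skip every field that is not the one requested, and print the value of the first match. Below is a minimal sketch of that loop, not the actual setup/common.sh helper, which also handles per-node meminfo files by stripping their "Node <n> " prefix, as the mem=("${mem[@]#Node +([0-9]) }") step visible in this trace shows.

    # Minimal sketch of the traced scan loop, assuming a plain /proc/meminfo query.
    get_meminfo_sketch() {
        local get=$1                          # field name, e.g. HugePages_Surp
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # every other field: continue
            echo "$val"                       # value only (kB count or page count)
            return 0
        done < /proc/meminfo
        return 1                              # field not present
    }

    # e.g.: get_meminfo_sketch HugePages_Surp   -> prints 0 on this machine

The per_node_1G_alloc test that starts below requests 1048576 kB on node 0; with the 2048 kB default hugepage size that works out to the 512 pages seen as nr_hugepages=512, NRHUGE=512, HUGENODE=0 and, in the later snapshots, HugePages_Total: 512 with Hugetlb: 1048576 kB. This excerpt does not show how scripts/setup.sh applies that reservation; for reference, a per-node request of this shape maps onto the standard kernel sysfs knob, where the node and page-size paths below are illustrative assumptions.

    # Illustrative only: reserve 512 x 2 MiB pages on NUMA node 0 (run as root).
    echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages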
00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.359 node0=1024 expecting 1024 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.359 00:03:45.359 real 0m0.991s 00:03:45.359 user 0m0.487s 00:03:45.359 sys 0m0.459s 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.359 16:19:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:45.359 ************************************ 00:03:45.359 END TEST default_setup 00:03:45.359 ************************************ 00:03:45.359 16:19:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:45.359 16:19:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:45.359 16:19:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.359 16:19:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.359 16:19:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.359 ************************************ 00:03:45.359 START TEST per_node_1G_alloc 00:03:45.359 ************************************ 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.359 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.359 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.929 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.929 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9029060 kB' 'MemAvailable: 10556796 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 494376 kB' 'Inactive: 1368996 kB' 'Active(anon): 132176 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123328 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140004 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73048 kB' 'KernelStack: 6440 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.929 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.930 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.931 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9029060 kB' 'MemAvailable: 10556796 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493948 kB' 'Inactive: 1368996 kB' 'Active(anon): 131748 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122932 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140004 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73048 kB' 'KernelStack: 6400 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.931 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.931 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.932 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.933 16:19:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9029448 kB' 'MemAvailable: 10557184 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493964 kB' 'Inactive: 1368996 kB' 'Active(anon): 131764 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122924 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140004 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73048 kB' 'KernelStack: 6400 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
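
The xtrace around this point is setup/common.sh's get_meminfo walking a snapshot of /proc/meminfo one field at a time: it strips any "Node N " prefix, splits each entry with IFS=': ', and keeps hitting "continue" until the field name matches the key it was asked for (HugePages_Surp above, HugePages_Rsvd next), at which point it echoes the value and returns. A condensed, standalone sketch of that parsing pattern follows; it mirrors what the trace shows but is illustrative only, not the actual SPDK setup/common.sh.

#!/usr/bin/env bash
# Condensed sketch of the meminfo-scanning pattern visible in the trace.
# The names mirror the traced script, but this is not the SPDK implementation.
shopt -s extglob

get_meminfo() {
        local get=$1 node=${2:-}
        local mem line var val _
        local mem_f=/proc/meminfo

        # Per-node counters live under sysfs; the system-wide pass in the
        # trace keeps /proc/meminfo because the node argument is empty.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix

        # Walk the snapshot field by field; skip (continue) everything that is
        # not the requested key, then print its value -- this is the loop that
        # dominates the trace above.
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<<"$line"
                [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
}

# Example: surplus 2 MiB huge pages system-wide, then on node 0.
get_meminfo HugePages_Surp
get_meminfo HugePages_Surp 0
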
00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.933 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 
16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.934 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.935 nr_hugepages=512 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:45.935 resv_hugepages=0 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.935 surplus_hugepages=0 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.935 anon_hugepages=0 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9029448 kB' 'MemAvailable: 10557184 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 494008 kB' 'Inactive: 1368996 kB' 'Active(anon): 131808 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 123004 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 139996 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73040 kB' 'KernelStack: 6416 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
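
For reference, the meminfo snapshot printed in the trace reports 'HugePages_Total: 512' with 'Hugepagesize: 2048 kB', i.e. 512 pages of 2048 kB each, which matches the 'Hugetlb: 1048576 kB' line and is the 1 GiB pool that per_node_1G_alloc verifies on this single-node VM. A one-line sanity check of that arithmetic:

echo $(( 512 * 2048 )) kB   # 1048576 kB = 1 GiB, matching the Hugetlb line above
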
00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
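
With HugePages_Surp and HugePages_Rsvd both read back as 0, hugepages.sh echoes nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then re-reads HugePages_Total (the scan in progress here) and checks it against the requested pool with (( 512 == nr_hugepages + surp + resv )). Below is a minimal sketch of that consistency check, reusing the get_meminfo sketch above; the helper name verify_hugepage_pool and its structure are assumptions for illustration, not the SPDK hugepages.sh code.

# Minimal sketch of the consistency check traced above. Assumes the
# get_meminfo sketch shown earlier; helper name is hypothetical.
verify_hugepage_pool() {
        local want=$1 surp resv total        # want: requested pool size (512 in this run)

        surp=$(get_meminfo HugePages_Surp)   # surplus pages
        resv=$(get_meminfo HugePages_Rsvd)   # reserved-but-not-yet-faulted pages
        total=$(get_meminfo HugePages_Total) # pages the kernel actually holds

        echo "nr_hugepages=$want"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"

        # Mirrors the checks visible in the trace: the kernel-reported total
        # must account for the requested pages plus any surplus and reserved.
        (( total == want + surp + resv ))
}

verify_hugepage_pool 512 && echo "hugepage pool OK"
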
00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.935 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 
16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.936 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9029448 kB' 'MemUsed: 3212536 kB' 'SwapCached: 0 kB' 'Active: 493988 kB' 'Inactive: 1368996 kB' 'Active(anon): 131788 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1741660 kB' 'Mapped: 48568 kB' 'AnonPages: 123008 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 66956 kB' 'Slab: 139996 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.937 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.938 node0=512 expecting 512 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:45.938 00:03:45.938 real 0m0.548s 00:03:45.938 user 0m0.286s 00:03:45.938 sys 0m0.296s 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.938 16:19:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.938 ************************************ 00:03:45.938 END TEST per_node_1G_alloc 00:03:45.938 ************************************ 00:03:46.196 16:19:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:46.196 16:19:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:46.196 16:19:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.196 16:19:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.196 16:19:04 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.196 ************************************ 00:03:46.196 START TEST even_2G_alloc 00:03:46.196 ************************************ 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.196 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:46.459 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.459 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.459 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc 
-- setup/hugepages.sh@92 -- # local surp 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7981952 kB' 'MemAvailable: 9509688 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 494116 kB' 'Inactive: 1368996 kB' 'Active(anon): 131916 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123016 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 139992 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73036 kB' 'KernelStack: 6440 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.459 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:46.460 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7982204 kB' 'MemAvailable: 9509940 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493900 kB' 'Inactive: 
1368996 kB' 'Active(anon): 131700 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122772 kB' 'Mapped: 48688 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 139992 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73036 kB' 'KernelStack: 6408 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.461 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.462 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7982700 kB' 'MemAvailable: 9510436 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493764 kB' 'Inactive: 1368996 kB' 'Active(anon): 131564 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122920 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 139988 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73032 kB' 'KernelStack: 6384 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.463 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.464 nr_hugepages=1024 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.464 resv_hugepages=0 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.464 surplus_hugepages=0 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.464 anon_hugepages=0 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.464 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7982700 kB' 'MemAvailable: 9510436 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493696 kB' 'Inactive: 1368996 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122828 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 139988 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73032 kB' 'KernelStack: 6352 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.465 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
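The long run of `[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]` / `continue` pairs above is the bash xtrace of `get_meminfo` in `setup/common.sh` walking every `/proc/meminfo` field until it reaches the requested key (first `HugePages_Rsvd`, then `HugePages_Total`). Condensed into a runnable sketch reconstructed from the logged commands — not the verbatim SPDK source — the helper behaves roughly like this:

```bash
#!/usr/bin/env bash
shopt -s extglob # needed for the "Node +([0-9]) " prefix strip seen in the trace

# Sketch of the get_meminfo pattern visible in the xtrace; names mirror the log.
get_meminfo() {
	local get=$1 node=${2:-} # e.g. get_meminfo HugePages_Total, get_meminfo HugePages_Surp 0
	local var val _
	local mem_f=/proc/meminfo
	# With a node id, read the per-node meminfo instead of the system-wide one.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }") # drop the "Node N " prefix of per-node entries
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue # skip every field until the requested one
		echo "$val"                      # e.g. 1024 for HugePages_Total
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Total  # system-wide hugepage count
get_meminfo HugePages_Surp 0 # surplus hugepages on NUMA node 0
```

This is what produces the `echo 0` for `HugePages_Rsvd`, the `echo 1024` for `HugePages_Total`, and, a little further down, the `echo 0` for `HugePages_Surp` read from `/sys/devices/system/node/node0/meminfo`.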
00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.466 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7982700 kB' 'MemUsed: 4259284 kB' 'SwapCached: 0 kB' 'Active: 493688 kB' 'Inactive: 1368996 kB' 'Active(anon): 131488 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1741660 kB' 'Mapped: 48568 kB' 'AnonPages: 122824 kB' 'Shmem: 10464 kB' 'KernelStack: 6404 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66956 kB' 'Slab: 139988 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.466 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.466 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.467 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.751 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:46.752 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.752 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.752 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.752 16:19:04 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.752 node0=1024 expecting 1024 00:03:46.752 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:46.752 16:19:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.752 00:03:46.752 real 0m0.522s 00:03:46.752 user 0m0.253s 00:03:46.752 sys 0m0.301s 00:03:46.752 16:19:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.752 16:19:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.752 ************************************ 00:03:46.752 END TEST even_2G_alloc 00:03:46.752 ************************************ 00:03:46.752 16:19:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:46.752 16:19:04 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:46.752 16:19:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.752 16:19:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.752 16:19:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.752 ************************************ 00:03:46.752 START TEST odd_alloc 00:03:46.752 ************************************ 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
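At the end of the `even_2G_alloc` test above and the start of `odd_alloc`, the trace shows `get_test_nr_hugepages 2098176` turning a 2,098,176 kB request into `nr_hugepages=1025` on the single test node, then exporting `HUGEMEM=2049` with `HUGE_EVEN_ALLOC=yes` before re-running `scripts/setup.sh`. The arithmetic implied by those numbers is sketched below; the ceiling division is an assumption that happens to reproduce the logged values, and the exact rounding in `setup/hugepages.sh` may differ.

```bash
# Sketch of the size -> hugepage-count conversion implied by the odd_alloc trace.
# Assumption: ceiling division over the 2048 kB hugepage size; it reproduces the
# logged values (2098176 kB -> nr_hugepages=1025, HUGEMEM=2049 MB).
default_hugepages=2048 # Hugepagesize from /proc/meminfo, in kB
size_kb=2098176        # requested pool size in kB (2049 MB), from hugepages.sh@159

nr_hugepages=$(((size_kb + default_hugepages - 1) / default_hugepages))
hugemem_mb=$((size_kb / 1024))

echo "nr_hugepages=$nr_hugepages" # 1025 -- the odd page count the test wants
echo "HUGEMEM=$hugemem_mb"        # 2049 -- exported before scripts/setup.sh runs again
```

With only one NUMA node in this VM (`no_nodes=1` earlier in the trace), `HUGE_EVEN_ALLOC=yes` spreads the pages evenly across nodes, which here means all 1025 land on node0; the system-wide meminfo snapshot that follows accordingly reports `HugePages_Total: 1025` and `Hugetlb: 2099200 kB` (1025 × 2048 kB).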
00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.752 16:19:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.013 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.013 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.013 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7979928 kB' 'MemAvailable: 9507664 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 494064 kB' 'Inactive: 1368996 kB' 'Active(anon): 131864 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123196 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140012 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73056 kB' 'KernelStack: 6408 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459996 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.013 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 
16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 
16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.014 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241984 kB' 'MemFree: 7979928 kB' 'MemAvailable: 9507664 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493804 kB' 'Inactive: 1368996 kB' 'Active(anon): 131604 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122988 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140012 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73056 kB' 'KernelStack: 6400 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459996 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
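The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' pairs being traced here come from setup/common.sh's get_meminfo: each call snapshots /proc/meminfo with mapfile and walks every field with an IFS=': ' read loop until it reaches the requested key, so a single HugePages_Surp lookup replays the entire field list in the xtrace. Below is a minimal stand-alone sketch of that pattern, assuming a plain system-wide /proc/meminfo and leaving out the per-node filtering the real helper supports.

#!/usr/bin/env bash
# Minimal illustration of the lookup pattern in the trace; not setup/common.sh.

get_meminfo() {
    local get=$1 mem_f=/proc/meminfo
    local line var val _
    local -a mem

    mapfile -t mem < "$mem_f"                 # one snapshot per call
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # skip every other field, as traced above
        echo "$val"                           # numeric value; a trailing 'kB' lands in _
        return 0
    done
    return 1
}

surp=$(get_meminfo HugePages_Surp)            # 0 on this host, matching the log
echo "surp=$surp"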
00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.015 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 
16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7979928 kB' 'MemAvailable: 9507664 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493784 kB' 'Inactive: 1368996 kB' 'Active(anon): 131584 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140012 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73056 kB' 'KernelStack: 6400 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459996 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.016 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
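This HugePages_Rsvd scan is the third full pass in a row: verify_nr_hugepages pulls AnonHugePages, HugePages_Surp and HugePages_Rsvd with separate get_meminfo calls (anon=0 and surp=0 already came back earlier in the trace), then checks them against the configured count in the '(( 1025 == nr_hugepages + surp + resv ))' test that follows once this pass returns. A hedged recap of that bookkeeping, using awk reads purely for illustration rather than the script's own loop:

#!/usr/bin/env bash
# Recap of the accounting this third scan feeds into. Values are the ones
# visible in the trace; the awk reads are an illustrative shortcut.

nr_hugepages=1025                                                # echoed in the trace
anon=$(awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo)   # 0 in the log
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0 in the log
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # 0 in the log

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# The assertion the trace performs once all three lookups have returned:
(( 1025 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"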
00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.017 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.018 nr_hugepages=1025 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:47.018 resv_hugepages=0 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.018 surplus_hugepages=0 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.018 anon_hugepages=0 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7979928 kB' 'MemAvailable: 9507664 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493784 kB' 'Inactive: 1368996 kB' 'Active(anon): 131584 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122696 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140012 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73056 kB' 'KernelStack: 6400 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459996 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.018 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.019 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.020 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7979928 kB' 'MemUsed: 4262056 kB' 'SwapCached: 0 kB' 'Active: 493800 kB' 'Inactive: 1368996 kB' 'Active(anon): 131600 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1741660 kB' 'Mapped: 48568 kB' 'AnonPages: 122968 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66956 kB' 'Slab: 140008 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
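Annotation: at this point the trace has confirmed the global view (HugePages_Total came back as 1025, with HugePages_Rsvd and HugePages_Surp both 0) and is re-checking the same counters through the node-local meminfo under /sys/devices/system/node/node0. A minimal sketch of that bookkeeping, added here for orientation only (the variable names nr_hugepages, surp and resv mirror the trace; the awk lookups are illustrative, not the harness's own code):

nr_hugepages=1025                                            # odd page count requested by the test
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0 on this run
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 on this run
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1025 on this run
(( total == nr_hugepages + surp + resv )) && echo "global hugepage count OK: $total"
# the same counters also exist per NUMA node, which is what the scan below reads:
awk '/^Node 0 HugePages_(Total|Free|Surp):/' /sys/devices/system/node/node0/meminfo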
00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.279 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
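Annotation: the long run of "continue" lines above is the harness's meminfo reader skipping every key until it reaches the one it was asked for (HugePages_Surp for node 0 here, which it is about to echo as 0). Reconstructed from the xtrace output rather than copied from setup/common.sh, a rough bash sketch of that lookup pattern looks like this (get_meminfo_sketch and its arguments are illustrative names):

shopt -s extglob                      # needed for the "Node N " prefix strip below
get_meminfo_sketch() {
    local get=$1 node=${2:-}          # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    # per-node lookups switch to the node-local file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # node files prefix every line with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"     # e.g. var=HugePages_Surp val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo_sketch HugePages_Surp 0   # prints 0, matching the echo in the trace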
00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.280 node0=1025 expecting 1025 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:47.280 00:03:47.280 real 0m0.530s 00:03:47.280 user 0m0.273s 00:03:47.280 sys 0m0.290s 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.280 16:19:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.280 ************************************ 00:03:47.280 END TEST odd_alloc 00:03:47.280 ************************************ 00:03:47.280 16:19:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.280 16:19:05 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:47.280 16:19:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.280 16:19:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.280 16:19:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.280 ************************************ 00:03:47.280 START TEST custom_alloc 00:03:47.280 ************************************ 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.280 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.541 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.541 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.541 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:47.541 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:47.541 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.541 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.541 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.541 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.541 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9038428 kB' 'MemAvailable: 10566164 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 494136 kB' 'Inactive: 1368996 kB' 'Active(anon): 131936 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123076 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 139992 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73036 kB' 'KernelStack: 6388 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
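Annotation: the custom_alloc prologue earlier in the trace turned a 1048576 kB (1 GiB) request into nr_hugepages=512 by dividing by the default 2048 kB hugepage size, and pinned all 512 pages to node 0 via HUGENODE='nodes_hp[0]=512'; the snapshot above now reports HugePages_Total: 512 with Hugetlb: 1048576 kB. The arithmetic, spelled out as a sketch (size_kb and default_kb are illustrative names; the kB unit handling is inferred from the values in the log):

size_kb=1048576                                                # budget requested by custom_alloc
default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this run
nr_hugepages=$(( size_kb / default_kb ))                       # 1048576 / 2048 = 512
echo "nr_hugepages=$nr_hugepages"                              # matches HugePages_Total: 512 above
echo "reserved: $(( nr_hugepages * default_kb )) kB"           # 1048576 kB, i.e. the Hugetlb line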
00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.542 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241984 kB' 'MemFree: 9038428 kB' 'MemAvailable: 10566164 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493808 kB' 'Inactive: 1368996 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122724 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 139988 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73032 kB' 'KernelStack: 6400 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.543 16:19:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.543 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.544 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
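A note on the operands in these tests: strings such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption. When xtrace prints a [[ ... == ... ]] expression whose right-hand side came from a quoted expansion, bash escapes every character of that operand to show it is matched literally rather than as a glob; the same rendering explains the *\[\n\e\v\e\r\]* in the transparent-hugepage check at the top of this block, where only the quoted "[never]" part is escaped. A short, illustrative reproduction under bash:

  set -x
  get=HugePages_Surp; var=MemTotal
  [[ $var == "$get" ]]   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]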
00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9038780 kB' 'MemAvailable: 10566516 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493772 kB' 'Inactive: 1368996 kB' 'Active(anon): 131572 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122948 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 139988 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73032 kB' 'KernelStack: 6384 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.545 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.546 16:19:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.546 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.546 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.546 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.807 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
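By this point the same snapshot has been walked once per counter: AnonHugePages first, then HugePages_Surp, and now HugePages_Rsvd, with HugePages_Total still to come. When reading a run like this interactively, the handful of hugepage counters can be pulled in a single pass instead; an equivalent one-liner for debugging, not something the test suite itself uses:

  awk -F': *' '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ { print $1, $2 }' /proc/meminfo

On this box it would report AnonHugePages 0 kB, HugePages_Total 512, HugePages_Free 512, HugePages_Rsvd 0 and HugePages_Surp 0, i.e. the values the scans above and below resolve to.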
00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.808 nr_hugepages=512 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:47.808 resv_hugepages=0 
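The Rsvd scan above resolves to resv=0, joining anon=0 and surp=0 from the earlier passes, and the helper has just echoed nr_hugepages=512. The consistency check traced on the next lines (hugepages.sh@107 and @109) therefore reduces to plain accounting against the expected total; a sketch of that arithmetic with this run's values (the sorted_t/sorted_s per-node bookkeeping declared at the top of verify_nr_hugepages is not exercised in this excerpt):

  nr_hugepages=512   # HugePages_Total reported for this run
  anon=0             # AnonHugePages (kB)
  surp=0             # HugePages_Surp
  resv=0             # HugePages_Rsvd
  (( 512 == nr_hugepages + surp + resv ))   # as traced at hugepages.sh@107
  (( 512 == nr_hugepages ))                 # as traced at hugepages.sh@109
  # Sanity: 512 pages x Hugepagesize 2048 kB = 1048576 kB, matching the Hugetlb field in the snapshots.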
00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.808 surplus_hugepages=0 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.808 anon_hugepages=0 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9038780 kB' 'MemAvailable: 10566516 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493800 kB' 'Inactive: 1368996 kB' 'Active(anon): 131600 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123024 kB' 'Mapped: 48568 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 139988 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73032 kB' 'KernelStack: 6416 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985308 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.808 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 
16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.809 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 9038780 kB' 'MemUsed: 3203204 kB' 'SwapCached: 0 kB' 'Active: 493880 kB' 'Inactive: 1368996 kB' 'Active(anon): 131680 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1741660 kB' 'Mapped: 48568 kB' 'AnonPages: 123056 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66956 kB' 'Slab: 139988 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.810 16:19:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.811 node0=512 expecting 512 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:47.811 00:03:47.811 real 0m0.529s 00:03:47.811 user 0m0.283s 00:03:47.811 sys 0m0.280s 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.811 16:19:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.811 ************************************ 00:03:47.811 END TEST custom_alloc 
00:03:47.811 ************************************ 00:03:47.811 16:19:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.811 16:19:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:47.811 16:19:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.811 16:19:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.811 16:19:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.811 ************************************ 00:03:47.811 START TEST no_shrink_alloc 00:03:47.811 ************************************ 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.811 16:19:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.071 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.071 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:48.071 
16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7988588 kB' 'MemAvailable: 9516324 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 494340 kB' 'Inactive: 1368996 kB' 'Active(anon): 132140 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123260 kB' 'Mapped: 48692 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140008 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73052 kB' 'KernelStack: 6416 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 
16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.071 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 
16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.072 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7988588 kB' 'MemAvailable: 9516324 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 494284 kB' 'Inactive: 1368996 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123200 kB' 'Mapped: 48572 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140012 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73056 kB' 'KernelStack: 6416 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.335 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.336 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7988588 kB' 'MemAvailable: 9516324 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493864 kB' 'Inactive: 1368996 kB' 'Active(anon): 131664 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123064 kB' 'Mapped: 48572 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140012 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73056 kB' 'KernelStack: 6416 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.337 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.338 nr_hugepages=1024 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.338 resv_hugepages=0 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.338 surplus_hugepages=0 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.338 anon_hugepages=0 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
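At this point the trace has established anon=0, surp=0 and resv=0 from the /proc/meminfo snapshots and echoed nr_hugepages=1024, so the pool being exercised is still fully idle and the trace's "(( 1024 == nr_hugepages + surp + resv ))" test passes. A minimal sketch of a check of the same shape, using illustrative helper and variable names rather than the project's setup/common.sh or setup/hugepages.sh:

#!/usr/bin/env bash
# Sketch only: re-implements the kind of lookup the trace above performs.
# meminfo_val and expected_pages are illustrative names, not SPDK's helpers.
expected_pages=1024    # requested hugepage pool size, as echoed in the trace

meminfo_val() {        # meminfo_val <Key>  ->  prints the numeric value
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$1" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

total=$(meminfo_val HugePages_Total)
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)

# Same shape as the trace's "(( 1024 == nr_hugepages + surp + resv ))" check:
# no surplus and no reserved pages means the whole pool is still idle.
if (( expected_pages == total + surp + resv )); then
    echo "hugepage pool idle: total=$total surp=$surp resv=$resv"
else
    echo "hugepage pool busy or resized" >&2
fi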
00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7988588 kB' 'MemAvailable: 9516324 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 493860 kB' 'Inactive: 1368996 kB' 'Active(anon): 131660 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122804 kB' 'Mapped: 48572 kB' 'Shmem: 10464 kB' 'KReclaimable: 66956 kB' 'Slab: 140012 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73056 kB' 'KernelStack: 6416 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 353492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.338 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
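The printf above is the global /proc/meminfo snapshot; the earlier [[ -e /sys/devices/system/node/node/meminfo ]] test only fails because no NUMA node was requested (node= is empty). When a node is given, the per-node meminfo file prefixes every line with "Node <N> ", which is what the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace strips off. A hedged sketch of that per-node variant, again with illustrative names and assuming extglob for the +([0-9]) pattern:

#!/usr/bin/env bash
# Sketch only: per-NUMA-node meminfo lookup; node_meminfo_val is an
# illustrative helper, not the project's setup/common.sh implementation.
shopt -s extglob                      # enables the +([0-9]) pattern below

node_meminfo_val() {                  # node_meminfo_val <node> <Key>
    local key=$2 f=/sys/devices/system/node/node$1/meminfo
    local -a mem
    local line var val _
    [[ -e $f ]] || f=/proc/meminfo    # fall back to the global file
    mapfile -t mem < "$f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done
    return 1
}

node_meminfo_val 0 HugePages_Total    # e.g. print node 0's hugepage count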
00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.339 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7988588 kB' 'MemUsed: 4253396 kB' 'SwapCached: 0 kB' 'Active: 493844 kB' 'Inactive: 1368996 kB' 'Active(anon): 131644 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'FilePages: 1741660 kB' 'Mapped: 48572 kB' 'AnonPages: 123048 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66956 kB' 'Slab: 140012 kB' 'SReclaimable: 66956 kB' 'SUnreclaim: 73056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.340 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 
16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.341 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.342 node0=1024 expecting 1024 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.342 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:48.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:48.601 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.601 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:48.601 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:48.601 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7992192 kB' 'MemAvailable: 9519920 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 489288 kB' 'Inactive: 1368996 kB' 'Active(anon): 127088 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118496 kB' 'Mapped: 47872 kB' 'Shmem: 10464 kB' 'KReclaimable: 66944 kB' 'Slab: 139712 kB' 'SReclaimable: 66944 kB' 'SUnreclaim: 72768 kB' 'KernelStack: 6344 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 335968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
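The verify_nr_hugepages pass starting above first checks the transparent-hugepage setting ("always [madvise] never" must not read "[never]") and then re-reads AnonHugePages and the per-node HugePages counters, after setup.sh reported that the 512 requested hugepages are already covered by the 1024 allocated on node0. A minimal sketch of that kind of per-node check against the standard hugetlb sysfs files follows; the NRHUGE default is taken from the trace, and 2048 kB is assumed as the hugepage size (matching the Hugepagesize value printed above).

#!/usr/bin/env bash
# Illustrative check: is the requested number of hugepages already allocated per node?
# Mirrors the "Requested 512 hugepages but 1024 already allocated on node0" message above.
NRHUGE=${NRHUGE:-512}        # requested page count (value taken from the trace)
HUGEPGSZ_KB=2048             # assumed default hugepage size (Hugepagesize: 2048 kB)

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*/node}
    nr_file=$node_dir/hugepages/hugepages-${HUGEPGSZ_KB}kB/nr_hugepages
    [[ -r $nr_file ]] || continue
    allocated=$(<"$nr_file")
    if (( allocated >= NRHUGE )); then
        echo "node$node: $allocated hugepages already allocated (>= $NRHUGE requested)"
    else
        echo "node$node: only $allocated of $NRHUGE hugepages allocated"
    fi
done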
00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.601 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.602 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.603 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7992444 kB' 'MemAvailable: 9520172 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 488884 kB' 'Inactive: 1368996 kB' 'Active(anon): 126684 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118092 kB' 'Mapped: 47832 kB' 'Shmem: 10464 kB' 'KReclaimable: 66944 kB' 'Slab: 139704 kB' 'SReclaimable: 66944 kB' 'SUnreclaim: 72760 kB' 'KernelStack: 6304 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 335968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.866 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
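The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs traced above and below is setup/common.sh walking /proc/meminfo one row at a time under 'set -x', skipping every key until it reaches the one requested (HugePages_Surp in this pass) and echoing its value. A minimal stand-alone sketch of that scan pattern, using nothing beyond what the trace shows; the helper name meminfo_value and its interface are assumptions, not the project's function:

# Sketch of the key-scan pattern visible in this trace; the name
# meminfo_value is hypothetical, not setup/common.sh's get_meminfo.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching rows are skipped, exactly like the repeated
        # "[[ <key> == ... ]]" / "continue" lines in the trace.
        [[ $var == "$get" ]] || continue
        echo "$val"    # numeric value; a trailing "kB" unit, if any, lands in $_
        return 0
    done < /proc/meminfo
    return 1
}
# Example: meminfo_value HugePages_Surp   -> prints 0 on this runner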
00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 
16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.867 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7993008 kB' 'MemAvailable: 9520736 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 488900 kB' 'Inactive: 1368996 kB' 'Active(anon): 126700 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117836 kB' 'Mapped: 47832 kB' 'Shmem: 10464 kB' 'KReclaimable: 66944 kB' 'Slab: 139704 kB' 'SReclaimable: 66944 kB' 'SUnreclaim: 72760 kB' 'KernelStack: 6304 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 335968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
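The "[[ -e /sys/devices/system/node/node/meminfo ]]" test a little above is the same helper called with an empty node argument, so it falls back to /proc/meminfo; when a NUMA node is given, the per-node file is read instead and the leading "Node N " prefix is stripped by the extglob pattern +([0-9]) visible in the mem=(...) line. A sketch of that source selection under those assumptions (the function name is hypothetical, and extglob is assumed to be enabled, as the pattern implies):

# Node-aware meminfo source selection, as suggested by the trace above.
# Hypothetical name; assumes "shopt -s extglob" for the +([0-9]) pattern.
shopt -s extglob

node_meminfo_lines() {
    local node=$1 mem_f=/proc/meminfo
    local -a mem
    # With a node number, prefer the per-NUMA-node file when it exists;
    # with an empty $node the test fails and /proc/meminfo is kept.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node rows look like "Node 0 MemTotal: ..."; drop the prefix so the
    # same key scan works for both files.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}
# Example: node_meminfo_lines ""    # whole-system view, as in this run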
00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.868 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
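Two quick consistency checks on the meminfo snapshots printed above, using only numbers that already appear in them; the hugepage pool and the slab split both add up:

# 1024 pages * 2048 kB/page = 2097152 kB, matching 'Hugetlb: 2097152 kB'.
echo $(( 1024 * 2048 ))
# 66944 kB SReclaimable + 72760 kB SUnreclaim = 139704 kB, matching 'Slab: 139704 kB'.
echo $(( 66944 + 72760 ))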
00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.869 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.870 nr_hugepages=1024 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.870 resv_hugepages=0 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.870 surplus_hugepages=0 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.870 anon_hugepages=0 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7993268 kB' 'MemAvailable: 9520996 kB' 'Buffers: 2436 kB' 'Cached: 1739224 kB' 'SwapCached: 0 kB' 'Active: 488920 kB' 'Inactive: 1368996 kB' 'Active(anon): 126720 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117852 kB' 'Mapped: 47832 kB' 'Shmem: 10464 kB' 'KReclaimable: 66944 kB' 'Slab: 139704 kB' 'SReclaimable: 66944 kB' 'SUnreclaim: 72760 kB' 'KernelStack: 6304 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461020 kB' 'Committed_AS: 335968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
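A few records above (hugepages.sh@102-@105 and the @107/@109 arithmetic tests), the no_shrink_alloc test has finished its three scans with anon=0, surp=0 and resv=0, echoes the totals, and asserts that the 1024 requested pages are fully accounted for before re-reading HugePages_Total. A condensed sketch of that accounting step with this run's values; the check_hugepages wrapper is an assumption, only the two (( )) tests come from the trace:

# Condensed form of the accounting asserted at hugepages.sh@107/@109 above.
# The wrapper name and signature are hypothetical; the arithmetic is from the trace.
check_hugepages() {
    local requested=$1 nr_hugepages=$2 surp=$3 resv=$4
    # Every requested page must be covered by allocated + surplus + reserved pages...
    (( requested == nr_hugepages + surp + resv )) || return 1
    # ...and with surp=resv=0 in this run, the pool itself must match exactly.
    (( requested == nr_hugepages ))
}
check_hugepages 1024 1024 0 0 && echo "hugepage accounting OK"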
00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.870 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.871 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241984 kB' 'MemFree: 7993268 kB' 'MemUsed: 4248716 kB' 'SwapCached: 0 kB' 'Active: 
488976 kB' 'Inactive: 1368996 kB' 'Active(anon): 126776 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1368996 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1741660 kB' 'Mapped: 47832 kB' 'AnonPages: 118184 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66944 kB' 'Slab: 139704 kB' 'SReclaimable: 66944 kB' 'SUnreclaim: 72760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 
16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.872 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.873 node0=1024 expecting 1024 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:48.873 00:03:48.873 real 0m1.041s 00:03:48.873 user 0m0.499s 00:03:48.873 sys 0m0.583s 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.873 16:19:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.873 ************************************ 00:03:48.873 END TEST no_shrink_alloc 00:03:48.873 ************************************ 00:03:48.873 16:19:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:48.873 16:19:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:48.873 16:19:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:48.873 16:19:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:48.873 
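For the node0=1024 check that closes the test above, the per-node hugepage counters are also available directly from sysfs. A rough, illustrative equivalent (variable names are mine; the test itself derives the counts from the node's meminfo fields shown in the trace):

    #!/usr/bin/env bash
    # Sketch: confirm each NUMA node still reports the expected number of
    # 2 MiB hugepages. "expected" is illustrative; the test above compares
    # against the value it allocated earlier (1024).
    expected=1024
    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}
        count=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node${id}=${count} expecting ${expected}"
        (( count == expected )) || echo "node${id} mismatch" >&2
    done
    # Teardown (clear_hp in the trace below) is the same path in reverse:
    # it writes 0 into every nodeN/hugepages/hugepages-*/nr_hugepages file.
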
16:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.873 16:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.873 16:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:48.873 16:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:48.873 16:19:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:48.873 16:19:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:48.873 00:03:48.873 real 0m4.607s 00:03:48.873 user 0m2.237s 00:03:48.873 sys 0m2.477s 00:03:48.873 16:19:06 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.873 ************************************ 00:03:48.873 END TEST hugepages 00:03:48.873 ************************************ 00:03:48.873 16:19:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:48.873 16:19:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:48.873 16:19:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:48.873 16:19:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.873 16:19:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.873 16:19:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:48.873 ************************************ 00:03:48.873 START TEST driver 00:03:48.873 ************************************ 00:03:48.873 16:19:07 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:49.132 * Looking for test storage... 00:03:49.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:49.132 16:19:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:49.132 16:19:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.132 16:19:07 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.698 16:19:07 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:49.698 16:19:07 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.698 16:19:07 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.698 16:19:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.698 ************************************ 00:03:49.698 START TEST guess_driver 00:03:49.698 ************************************ 00:03:49.698 16:19:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:49.698 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:49.698 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:49.698 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:49.698 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:49.698 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:49.698 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:49.698 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:49.698 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
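The guess_driver steps that start here boil down to: use vfio-pci when the IOMMU is usable, otherwise fall back to uio_pci_generic if modprobe can resolve it. A hedged sketch of that decision (pick is a made-up name and this mirrors only the order of checks in the trace, not the exact driver.sh code):

    #!/usr/bin/env bash
    pick() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=""
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        # vfio-pci needs populated IOMMU groups or unsafe no-IOMMU mode.
        if [[ -e ${groups[0]} || $unsafe == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # Otherwise uio_pci_generic, provided the module resolves to a .ko
        # (this is what the modprobe --show-depends check in the trace verifies).
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found' >&2
        return 1
    }

    pick   # on this VM: no IOMMU groups, so uio_pci_generic is chosen
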
00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:49.699 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:49.699 Looking for driver=uio_pci_generic 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.699 16:19:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:50.265 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:50.265 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:50.265 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.265 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.265 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:50.265 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.265 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:50.265 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:50.265 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:50.522 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:50.522 16:19:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:50.522 16:19:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.522 16:19:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.087 00:03:51.087 real 0m1.420s 00:03:51.087 user 0m0.531s 00:03:51.087 sys 0m0.902s 00:03:51.087 16:19:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:51.087 ************************************ 00:03:51.087 END TEST guess_driver 00:03:51.087 16:19:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:51.087 ************************************ 00:03:51.087 16:19:09 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:51.087 00:03:51.087 real 0m2.110s 00:03:51.087 user 0m0.763s 00:03:51.087 sys 0m1.410s 00:03:51.087 16:19:09 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.087 16:19:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:51.087 ************************************ 00:03:51.087 END TEST driver 00:03:51.087 ************************************ 00:03:51.087 16:19:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:51.087 16:19:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:51.087 16:19:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.087 16:19:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.087 16:19:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:51.087 ************************************ 00:03:51.087 START TEST devices 00:03:51.087 ************************************ 00:03:51.087 16:19:09 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:51.087 * Looking for test storage... 00:03:51.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:51.087 16:19:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:51.087 16:19:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:51.087 16:19:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.087 16:19:09 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:52.020 16:19:09 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
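The device scan that begins here filters candidate disks before the mount tests: zoned namespaces are excluded, anything with an existing partition signature counts as in use, and the disk must be at least 3 GiB (min_disk_size=3221225472 in the trace). A simplified sketch of that filter (the real test also records each disk's PCI address and runs spdk-gpt.py alongside blkid):

    #!/usr/bin/env bash
    # Illustrative filter only; the paths and the 3 GiB floor follow the trace.
    min_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes

    for dev in /sys/block/nvme*; do
        name=${dev##*/}
        # Zoned block devices report something other than "none" here.
        [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]] && continue
        # A non-empty PTTYPE means a partition table already exists.
        [[ -n $(blkid -s PTTYPE -o value "/dev/$name" 2>/dev/null) ]] && continue
        # /sys/block/<dev>/size counts 512-byte sectors.
        bytes=$(( $(<"$dev/size") * 512 ))
        (( bytes >= min_size )) && echo "candidate: $name ($bytes bytes)"
    done
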
00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:52.020 16:19:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:52.021 16:19:09 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:52.021 No valid GPT data, bailing 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:52.021 16:19:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:52.021 16:19:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:52.021 16:19:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:52.021 
16:19:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:52.021 No valid GPT data, bailing 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:52.021 16:19:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:52.021 16:19:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:52.021 16:19:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:52.021 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:52.021 No valid GPT data, bailing 00:03:52.021 16:19:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:52.279 16:19:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:52.279 16:19:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:52.279 16:19:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:52.279 16:19:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:52.279 16:19:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:52.279 16:19:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:52.279 16:19:10 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:52.279 No valid GPT data, bailing 00:03:52.279 16:19:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:52.279 16:19:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:52.279 16:19:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:52.279 16:19:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:52.279 16:19:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:52.279 16:19:10 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:52.279 16:19:10 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:52.279 16:19:10 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.279 16:19:10 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.279 16:19:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:52.279 ************************************ 00:03:52.279 START TEST nvme_mount 00:03:52.279 ************************************ 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:52.279 16:19:10 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:53.232 Creating new GPT entries in memory. 00:03:53.232 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:53.232 other utilities. 00:03:53.232 16:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:53.232 16:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.232 16:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:53.232 16:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:53.232 16:19:11 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:54.167 Creating new GPT entries in memory. 00:03:54.167 The operation has completed successfully. 00:03:54.167 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:54.167 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:54.167 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58911 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.425 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.682 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.682 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.682 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.682 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:54.940 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.940 16:19:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.198 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:55.198 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:55.198 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:55.198 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.198 16:19:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.455 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.455 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:55.456 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:55.456 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.456 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.456 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.456 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.456 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.714 16:19:13 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.714 16:19:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.972 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.973 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:55.973 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:55.973 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.973 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.973 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:55.973 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:55.973 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:56.231 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:56.231 00:03:56.231 real 0m4.014s 00:03:56.231 user 0m0.746s 00:03:56.231 sys 0m1.015s 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.231 16:19:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:56.231 ************************************ 00:03:56.231 END TEST nvme_mount 00:03:56.231 ************************************ 00:03:56.231 16:19:14 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:56.231 16:19:14 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:56.231 16:19:14 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.231 16:19:14 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.231 16:19:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:56.231 ************************************ 00:03:56.231 START TEST dm_mount 00:03:56.231 ************************************ 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:56.231 16:19:14 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:57.608 Creating new GPT entries in memory. 00:03:57.608 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:57.608 other utilities. 00:03:57.608 16:19:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:57.608 16:19:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.608 16:19:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.608 16:19:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.608 16:19:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:58.542 Creating new GPT entries in memory. 00:03:58.542 The operation has completed successfully. 00:03:58.542 16:19:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.542 16:19:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.542 16:19:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.542 16:19:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.542 16:19:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:59.478 The operation has completed successfully. 
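[editor's note] For readers following the dm_mount trace above: partition_drive wipes the GPT and carves two equal 128 MiB partitions before a device-mapper target is layered on top. A minimal stand-alone sketch of that sequence follows; the device path and sector ranges are taken from the trace, but the device-mapper table at the end is an assumed linear concatenation for illustration only, since the table devices.sh feeds to dmsetup is not shown in this log.

    # Hedged sketch of the partition/dm steps exercised by dm_mount
    # (assumes /dev/nvme0n1 is a disposable test disk, as in the trace).
    disk=/dev/nvme0n1

    # Destroy any existing GPT/MBR metadata, then create two partitions at
    # the same sector ranges the trace uses (2048-264191 and 264192-526335).
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:264191
    flock "$disk" sgdisk "$disk" --new=2:264192:526335

    # Illustrative only: concatenate the two partitions into one dm device.
    # The actual table built by setup/devices.sh is not visible in the trace.
    dmsetup create nvme_dm_test <<'EOF'
    0 262144 linear /dev/nvme0n1p1 0
    262144 262144 linear /dev/nvme0n1p2 0
    EOF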
00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59347 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.478 16:19:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.736 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.736 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:59.736 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:59.736 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.736 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.736 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.736 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.736 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.993 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.993 16:19:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.993 16:19:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.250 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:00.250 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:00.250 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:00.250 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.250 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:00.250 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.250 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:00.250 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:00.523 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:00.523 00:04:00.523 real 0m4.201s 00:04:00.523 user 0m0.462s 00:04:00.523 sys 0m0.690s 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.523 16:19:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:00.523 ************************************ 00:04:00.523 END TEST dm_mount 00:04:00.523 ************************************ 00:04:00.523 16:19:18 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:00.523 16:19:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:00.523 16:19:18 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:00.523 16:19:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:00.523 16:19:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.523 16:19:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:00.523 16:19:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:00.523 16:19:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:00.780 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:00.780 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:00.780 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:00.780 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:00.780 16:19:18 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:00.780 16:19:18 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:00.780 16:19:18 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:00.780 16:19:18 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.780 16:19:18 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:00.780 16:19:18 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:00.780 16:19:18 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:00.780 00:04:00.780 real 0m9.730s 00:04:00.780 user 0m1.857s 00:04:00.780 sys 0m2.291s 00:04:00.780 16:19:18 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.780 16:19:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:00.780 ************************************ 00:04:00.780 END TEST devices 00:04:00.780 ************************************ 00:04:00.780 16:19:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:00.780 00:04:00.780 real 0m21.499s 00:04:00.780 user 0m7.087s 00:04:00.780 sys 0m8.934s 00:04:00.780 16:19:18 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.780 16:19:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:00.780 ************************************ 00:04:00.780 END TEST setup.sh 00:04:00.780 ************************************ 00:04:01.036 16:19:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:01.036 16:19:18 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:01.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.601 Hugepages 00:04:01.601 node hugesize free / total 00:04:01.601 node0 1048576kB 0 / 0 00:04:01.601 node0 2048kB 2048 / 2048 00:04:01.601 00:04:01.601 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.601 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:01.601 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:01.858 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:01.858 16:19:19 -- spdk/autotest.sh@130 -- # uname -s 00:04:01.858 16:19:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:01.858 16:19:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:01.858 16:19:19 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.422 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.422 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.678 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.678 16:19:20 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:03.607 16:19:21 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:03.607 16:19:21 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:03.607 16:19:21 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:03.607 16:19:21 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:03.607 16:19:21 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:03.607 16:19:21 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:03.607 16:19:21 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.607 16:19:21 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.607 16:19:21 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:03.607 16:19:21 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:03.607 16:19:21 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.607 16:19:21 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.170 Waiting for block devices as requested 00:04:04.170 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.170 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:04.170 16:19:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:04.170 16:19:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:04.170 16:19:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.170 16:19:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:04.170 16:19:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.170 16:19:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:04.170 16:19:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:04.170 16:19:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:04.170 16:19:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:04.170 16:19:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:04.170 16:19:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:04.170 16:19:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:04.170 16:19:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:04.170 16:19:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:04.170 16:19:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:04.170 16:19:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:04.170 16:19:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:04.170 16:19:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:04.170 16:19:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:04.170 16:19:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:04.170 16:19:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:04.170 16:19:22 -- common/autotest_common.sh@1557 -- # continue 00:04:04.170 
16:19:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:04.170 16:19:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:04.170 16:19:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:04.170 16:19:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:04.170 16:19:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.170 16:19:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:04.428 16:19:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:04.428 16:19:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:04.428 16:19:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:04.428 16:19:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:04.428 16:19:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:04.428 16:19:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:04.428 16:19:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:04.428 16:19:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:04.428 16:19:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:04.428 16:19:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:04.428 16:19:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:04.428 16:19:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:04.428 16:19:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:04.428 16:19:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:04.428 16:19:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:04.428 16:19:22 -- common/autotest_common.sh@1557 -- # continue 00:04:04.428 16:19:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:04.428 16:19:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:04.428 16:19:22 -- common/autotest_common.sh@10 -- # set +x 00:04:04.428 16:19:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:04.428 16:19:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.428 16:19:22 -- common/autotest_common.sh@10 -- # set +x 00:04:04.428 16:19:22 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.997 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.254 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:05.254 16:19:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:05.254 16:19:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.254 16:19:23 -- common/autotest_common.sh@10 -- # set +x 00:04:05.254 16:19:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:05.254 16:19:23 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:05.254 16:19:23 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:05.254 16:19:23 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:05.254 16:19:23 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:05.254 16:19:23 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:05.254 16:19:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:05.254 16:19:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:05.254 16:19:23 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.254 16:19:23 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.254 16:19:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:05.254 16:19:23 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:05.254 16:19:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.254 16:19:23 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:05.254 16:19:23 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:05.254 16:19:23 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:05.254 16:19:23 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.254 16:19:23 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:05.254 16:19:23 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:05.254 16:19:23 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:05.254 16:19:23 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:05.254 16:19:23 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:05.254 16:19:23 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:05.254 16:19:23 -- common/autotest_common.sh@1593 -- # return 0 00:04:05.254 16:19:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:05.254 16:19:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:05.254 16:19:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:05.254 16:19:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:05.254 16:19:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:05.254 16:19:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:05.254 16:19:23 -- common/autotest_common.sh@10 -- # set +x 00:04:05.254 16:19:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:05.254 16:19:23 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.254 16:19:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.254 16:19:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.254 16:19:23 -- common/autotest_common.sh@10 -- # set +x 00:04:05.254 ************************************ 00:04:05.254 START TEST env 00:04:05.254 ************************************ 00:04:05.254 16:19:23 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:05.511 * Looking for test storage... 
00:04:05.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:05.511 16:19:23 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.511 16:19:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.511 16:19:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.511 16:19:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.511 ************************************ 00:04:05.511 START TEST env_memory 00:04:05.511 ************************************ 00:04:05.511 16:19:23 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:05.511 00:04:05.511 00:04:05.511 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.511 http://cunit.sourceforge.net/ 00:04:05.511 00:04:05.511 00:04:05.511 Suite: memory 00:04:05.511 Test: alloc and free memory map ...[2024-07-21 16:19:23.531013] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:05.511 passed 00:04:05.511 Test: mem map translation ...[2024-07-21 16:19:23.561903] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:05.511 [2024-07-21 16:19:23.561951] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:05.511 [2024-07-21 16:19:23.562015] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:05.511 [2024-07-21 16:19:23.562025] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:05.511 passed 00:04:05.511 Test: mem map registration ...[2024-07-21 16:19:23.625723] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:05.511 [2024-07-21 16:19:23.625756] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:05.511 passed 00:04:05.511 Test: mem map adjacent registrations ...passed 00:04:05.511 00:04:05.511 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.511 suites 1 1 n/a 0 0 00:04:05.511 tests 4 4 4 0 0 00:04:05.511 asserts 152 152 152 0 n/a 00:04:05.511 00:04:05.511 Elapsed time = 0.213 seconds 00:04:05.511 00:04:05.511 real 0m0.228s 00:04:05.511 user 0m0.213s 00:04:05.511 sys 0m0.013s 00:04:05.511 16:19:23 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.511 16:19:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:05.511 ************************************ 00:04:05.511 END TEST env_memory 00:04:05.511 ************************************ 00:04:05.769 16:19:23 env -- common/autotest_common.sh@1142 -- # return 0 00:04:05.769 16:19:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:05.769 16:19:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.769 16:19:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.769 16:19:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.769 ************************************ 00:04:05.769 START TEST env_vtophys 
00:04:05.769 ************************************ 00:04:05.769 16:19:23 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:05.769 EAL: lib.eal log level changed from notice to debug 00:04:05.769 EAL: Detected lcore 0 as core 0 on socket 0 00:04:05.769 EAL: Detected lcore 1 as core 0 on socket 0 00:04:05.769 EAL: Detected lcore 2 as core 0 on socket 0 00:04:05.769 EAL: Detected lcore 3 as core 0 on socket 0 00:04:05.769 EAL: Detected lcore 4 as core 0 on socket 0 00:04:05.769 EAL: Detected lcore 5 as core 0 on socket 0 00:04:05.769 EAL: Detected lcore 6 as core 0 on socket 0 00:04:05.769 EAL: Detected lcore 7 as core 0 on socket 0 00:04:05.769 EAL: Detected lcore 8 as core 0 on socket 0 00:04:05.769 EAL: Detected lcore 9 as core 0 on socket 0 00:04:05.769 EAL: Maximum logical cores by configuration: 128 00:04:05.769 EAL: Detected CPU lcores: 10 00:04:05.769 EAL: Detected NUMA nodes: 1 00:04:05.769 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:05.769 EAL: Detected shared linkage of DPDK 00:04:05.769 EAL: No shared files mode enabled, IPC will be disabled 00:04:05.769 EAL: Selected IOVA mode 'PA' 00:04:05.769 EAL: Probing VFIO support... 00:04:05.769 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.769 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:05.769 EAL: Ask a virtual area of 0x2e000 bytes 00:04:05.769 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:05.769 EAL: Setting up physically contiguous memory... 00:04:05.769 EAL: Setting maximum number of open files to 524288 00:04:05.769 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:05.769 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:05.769 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.769 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:05.769 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.769 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.769 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:05.769 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:05.769 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.769 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:05.769 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.769 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.769 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:05.769 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:05.769 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.769 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:05.769 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.769 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.769 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:05.769 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:05.769 EAL: Ask a virtual area of 0x61000 bytes 00:04:05.769 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:05.769 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:05.769 EAL: Ask a virtual area of 0x400000000 bytes 00:04:05.769 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:05.769 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:05.769 EAL: Hugepages will be freed exactly as allocated. 
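[editor's note] The EAL output above (memseg list reservations, "Hugepages will be freed exactly as allocated") relies on the 2 MiB hugepage pool reported earlier in the setup.sh status summary. A hedged sketch of reserving and inspecting such a pool outside the harness; the HUGEMEM variable is the knob setup.sh documents, but the size here is illustrative, not what this CI VM used:

    # Reserve roughly 2 GiB of 2 MiB hugepages for SPDK/DPDK (illustrative size).
    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh

    # Inspect the pool the EAL draws from while the vtophys test below
    # repeatedly expands and shrinks its heap.
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages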
00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: TSC frequency is ~2200000 KHz 00:04:05.769 EAL: Main lcore 0 is ready (tid=7f643e84ca00;cpuset=[0]) 00:04:05.769 EAL: Trying to obtain current memory policy. 00:04:05.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.769 EAL: Restoring previous memory policy: 0 00:04:05.769 EAL: request: mp_malloc_sync 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: Heap on socket 0 was expanded by 2MB 00:04:05.769 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:05.769 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:05.769 EAL: Mem event callback 'spdk:(nil)' registered 00:04:05.769 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:05.769 00:04:05.769 00:04:05.769 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.769 http://cunit.sourceforge.net/ 00:04:05.769 00:04:05.769 00:04:05.769 Suite: components_suite 00:04:05.769 Test: vtophys_malloc_test ...passed 00:04:05.769 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:05.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.769 EAL: Restoring previous memory policy: 4 00:04:05.769 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.769 EAL: request: mp_malloc_sync 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: Heap on socket 0 was expanded by 4MB 00:04:05.769 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.769 EAL: request: mp_malloc_sync 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: Heap on socket 0 was shrunk by 4MB 00:04:05.769 EAL: Trying to obtain current memory policy. 00:04:05.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.769 EAL: Restoring previous memory policy: 4 00:04:05.769 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.769 EAL: request: mp_malloc_sync 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: Heap on socket 0 was expanded by 6MB 00:04:05.769 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.769 EAL: request: mp_malloc_sync 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: Heap on socket 0 was shrunk by 6MB 00:04:05.769 EAL: Trying to obtain current memory policy. 00:04:05.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.769 EAL: Restoring previous memory policy: 4 00:04:05.769 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.769 EAL: request: mp_malloc_sync 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: Heap on socket 0 was expanded by 10MB 00:04:05.769 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.769 EAL: request: mp_malloc_sync 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: Heap on socket 0 was shrunk by 10MB 00:04:05.769 EAL: Trying to obtain current memory policy. 
00:04:05.769 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.769 EAL: Restoring previous memory policy: 4 00:04:05.769 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.769 EAL: request: mp_malloc_sync 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: Heap on socket 0 was expanded by 18MB 00:04:05.769 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.769 EAL: request: mp_malloc_sync 00:04:05.769 EAL: No shared files mode enabled, IPC is disabled 00:04:05.769 EAL: Heap on socket 0 was shrunk by 18MB 00:04:05.769 EAL: Trying to obtain current memory policy. 00:04:05.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.770 EAL: Restoring previous memory policy: 4 00:04:05.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.770 EAL: request: mp_malloc_sync 00:04:05.770 EAL: No shared files mode enabled, IPC is disabled 00:04:05.770 EAL: Heap on socket 0 was expanded by 34MB 00:04:05.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.770 EAL: request: mp_malloc_sync 00:04:05.770 EAL: No shared files mode enabled, IPC is disabled 00:04:05.770 EAL: Heap on socket 0 was shrunk by 34MB 00:04:05.770 EAL: Trying to obtain current memory policy. 00:04:05.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.770 EAL: Restoring previous memory policy: 4 00:04:05.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.770 EAL: request: mp_malloc_sync 00:04:05.770 EAL: No shared files mode enabled, IPC is disabled 00:04:05.770 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.028 EAL: request: mp_malloc_sync 00:04:06.028 EAL: No shared files mode enabled, IPC is disabled 00:04:06.028 EAL: Heap on socket 0 was shrunk by 66MB 00:04:06.028 EAL: Trying to obtain current memory policy. 00:04:06.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.028 EAL: Restoring previous memory policy: 4 00:04:06.028 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.028 EAL: request: mp_malloc_sync 00:04:06.028 EAL: No shared files mode enabled, IPC is disabled 00:04:06.028 EAL: Heap on socket 0 was expanded by 130MB 00:04:06.028 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.028 EAL: request: mp_malloc_sync 00:04:06.028 EAL: No shared files mode enabled, IPC is disabled 00:04:06.028 EAL: Heap on socket 0 was shrunk by 130MB 00:04:06.028 EAL: Trying to obtain current memory policy. 00:04:06.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.028 EAL: Restoring previous memory policy: 4 00:04:06.028 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.028 EAL: request: mp_malloc_sync 00:04:06.028 EAL: No shared files mode enabled, IPC is disabled 00:04:06.028 EAL: Heap on socket 0 was expanded by 258MB 00:04:06.028 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.028 EAL: request: mp_malloc_sync 00:04:06.028 EAL: No shared files mode enabled, IPC is disabled 00:04:06.028 EAL: Heap on socket 0 was shrunk by 258MB 00:04:06.028 EAL: Trying to obtain current memory policy. 
00:04:06.028 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.286 EAL: Restoring previous memory policy: 4 00:04:06.286 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.286 EAL: request: mp_malloc_sync 00:04:06.286 EAL: No shared files mode enabled, IPC is disabled 00:04:06.286 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.286 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.544 EAL: request: mp_malloc_sync 00:04:06.544 EAL: No shared files mode enabled, IPC is disabled 00:04:06.544 EAL: Heap on socket 0 was shrunk by 514MB 00:04:06.544 EAL: Trying to obtain current memory policy. 00:04:06.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.801 EAL: Restoring previous memory policy: 4 00:04:06.801 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.801 EAL: request: mp_malloc_sync 00:04:06.801 EAL: No shared files mode enabled, IPC is disabled 00:04:06.801 EAL: Heap on socket 0 was expanded by 1026MB 00:04:07.062 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.636 passed 00:04:07.636 00:04:07.636 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.636 suites 1 1 n/a 0 0 00:04:07.636 tests 2 2 2 0 0 00:04:07.636 asserts 5218 5218 5218 0 n/a 00:04:07.636 00:04:07.636 Elapsed time = 1.631 seconds 00:04:07.636 EAL: request: mp_malloc_sync 00:04:07.636 EAL: No shared files mode enabled, IPC is disabled 00:04:07.636 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:07.636 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.636 EAL: request: mp_malloc_sync 00:04:07.636 EAL: No shared files mode enabled, IPC is disabled 00:04:07.636 EAL: Heap on socket 0 was shrunk by 2MB 00:04:07.636 EAL: No shared files mode enabled, IPC is disabled 00:04:07.636 EAL: No shared files mode enabled, IPC is disabled 00:04:07.636 EAL: No shared files mode enabled, IPC is disabled 00:04:07.636 00:04:07.636 real 0m1.834s 00:04:07.636 user 0m1.054s 00:04:07.636 sys 0m0.645s 00:04:07.636 16:19:25 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.636 16:19:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:07.636 ************************************ 00:04:07.636 END TEST env_vtophys 00:04:07.636 ************************************ 00:04:07.636 16:19:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:07.636 16:19:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:07.636 16:19:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.636 16:19:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.636 16:19:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.636 ************************************ 00:04:07.636 START TEST env_pci 00:04:07.636 ************************************ 00:04:07.636 16:19:25 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:07.636 00:04:07.636 00:04:07.636 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.636 http://cunit.sourceforge.net/ 00:04:07.636 00:04:07.636 00:04:07.636 Suite: pci 00:04:07.636 Test: pci_hook ...[2024-07-21 16:19:25.669978] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60546 has claimed it 00:04:07.636 passed 00:04:07.636 00:04:07.636 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.636 suites 1 1 n/a 0 0 00:04:07.636 tests 1 1 1 0 0 00:04:07.636 asserts 25 25 25 0 n/a 00:04:07.636 
00:04:07.636 Elapsed time = 0.002 seconds 00:04:07.636 EAL: Cannot find device (10000:00:01.0) 00:04:07.636 EAL: Failed to attach device on primary process 00:04:07.636 00:04:07.636 real 0m0.026s 00:04:07.636 user 0m0.010s 00:04:07.636 sys 0m0.014s 00:04:07.636 16:19:25 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.636 16:19:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:07.636 ************************************ 00:04:07.636 END TEST env_pci 00:04:07.636 ************************************ 00:04:07.636 16:19:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:07.636 16:19:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:07.636 16:19:25 env -- env/env.sh@15 -- # uname 00:04:07.636 16:19:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:07.636 16:19:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:07.636 16:19:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.636 16:19:25 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:07.636 16:19:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.636 16:19:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.636 ************************************ 00:04:07.636 START TEST env_dpdk_post_init 00:04:07.636 ************************************ 00:04:07.636 16:19:25 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:07.636 EAL: Detected CPU lcores: 10 00:04:07.636 EAL: Detected NUMA nodes: 1 00:04:07.636 EAL: Detected shared linkage of DPDK 00:04:07.636 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.636 EAL: Selected IOVA mode 'PA' 00:04:07.894 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.894 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:07.894 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:07.894 Starting DPDK initialization... 00:04:07.894 Starting SPDK post initialization... 00:04:07.894 SPDK NVMe probe 00:04:07.894 Attaching to 0000:00:10.0 00:04:07.894 Attaching to 0000:00:11.0 00:04:07.894 Attached to 0000:00:10.0 00:04:07.894 Attached to 0000:00:11.0 00:04:07.894 Cleaning up... 
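[editor's note] The probe lines above ("Attached to 0000:00:10.0" and "0000:00:11.0") work because an earlier setup.sh run rebound both QEMU NVMe controllers from the kernel nvme driver to uio_pci_generic. A hedged sketch of steering that binding by hand, using only the PCI_ALLOWED filter and setup.sh subcommands already visible in this log:

    # Bind just the two test controllers to a userspace driver so DPDK can probe them.
    sudo PCI_ALLOWED="0000:00:10.0 0000:00:11.0" /home/vagrant/spdk_repo/spdk/scripts/setup.sh config

    # Confirm what is currently bound where.
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status

    # Hand the devices back to their kernel drivers when done.
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset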
00:04:07.894 00:04:07.894 real 0m0.177s 00:04:07.894 user 0m0.046s 00:04:07.894 sys 0m0.032s 00:04:07.894 16:19:25 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.894 16:19:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:07.894 ************************************ 00:04:07.894 END TEST env_dpdk_post_init 00:04:07.894 ************************************ 00:04:07.894 16:19:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:07.894 16:19:25 env -- env/env.sh@26 -- # uname 00:04:07.894 16:19:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:07.894 16:19:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.894 16:19:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.894 16:19:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.894 16:19:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.894 ************************************ 00:04:07.894 START TEST env_mem_callbacks 00:04:07.894 ************************************ 00:04:07.894 16:19:25 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.894 EAL: Detected CPU lcores: 10 00:04:07.894 EAL: Detected NUMA nodes: 1 00:04:07.894 EAL: Detected shared linkage of DPDK 00:04:07.894 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.894 EAL: Selected IOVA mode 'PA' 00:04:08.152 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.152 00:04:08.152 00:04:08.152 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.152 http://cunit.sourceforge.net/ 00:04:08.152 00:04:08.152 00:04:08.152 Suite: memory 00:04:08.152 Test: test ... 
00:04:08.152 register 0x200000200000 2097152 00:04:08.152 malloc 3145728 00:04:08.152 register 0x200000400000 4194304 00:04:08.152 buf 0x200000500000 len 3145728 PASSED 00:04:08.152 malloc 64 00:04:08.152 buf 0x2000004fff40 len 64 PASSED 00:04:08.152 malloc 4194304 00:04:08.152 register 0x200000800000 6291456 00:04:08.152 buf 0x200000a00000 len 4194304 PASSED 00:04:08.152 free 0x200000500000 3145728 00:04:08.152 free 0x2000004fff40 64 00:04:08.152 unregister 0x200000400000 4194304 PASSED 00:04:08.152 free 0x200000a00000 4194304 00:04:08.152 unregister 0x200000800000 6291456 PASSED 00:04:08.152 malloc 8388608 00:04:08.152 register 0x200000400000 10485760 00:04:08.152 buf 0x200000600000 len 8388608 PASSED 00:04:08.152 free 0x200000600000 8388608 00:04:08.152 unregister 0x200000400000 10485760 PASSED 00:04:08.152 passed 00:04:08.152 00:04:08.152 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.152 suites 1 1 n/a 0 0 00:04:08.152 tests 1 1 1 0 0 00:04:08.152 asserts 15 15 15 0 n/a 00:04:08.152 00:04:08.152 Elapsed time = 0.010 seconds 00:04:08.152 00:04:08.152 real 0m0.148s 00:04:08.152 user 0m0.019s 00:04:08.152 sys 0m0.028s 00:04:08.152 16:19:26 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.152 ************************************ 00:04:08.152 END TEST env_mem_callbacks 00:04:08.152 ************************************ 00:04:08.152 16:19:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:08.152 16:19:26 env -- common/autotest_common.sh@1142 -- # return 0 00:04:08.152 00:04:08.152 real 0m2.780s 00:04:08.152 user 0m1.453s 00:04:08.152 sys 0m0.966s 00:04:08.152 16:19:26 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.152 16:19:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.152 ************************************ 00:04:08.152 END TEST env 00:04:08.152 ************************************ 00:04:08.152 16:19:26 -- common/autotest_common.sh@1142 -- # return 0 00:04:08.152 16:19:26 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:08.152 16:19:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.152 16:19:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.152 16:19:26 -- common/autotest_common.sh@10 -- # set +x 00:04:08.152 ************************************ 00:04:08.152 START TEST rpc 00:04:08.152 ************************************ 00:04:08.152 16:19:26 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:08.152 * Looking for test storage... 00:04:08.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.152 16:19:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60656 00:04:08.152 16:19:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:08.152 16:19:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.152 16:19:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60656 00:04:08.152 16:19:26 rpc -- common/autotest_common.sh@829 -- # '[' -z 60656 ']' 00:04:08.152 16:19:26 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.152 16:19:26 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:08.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.152 16:19:26 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
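A rough manual equivalent of starting the target and waiting for its RPC socket, assuming the default /var/tmp/spdk.sock path shown above; the polling loop is illustrative, not the harness's actual waitforlisten helper:

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -e bdev &   # same tracepoint group mask the suite enables
    # Poll the RPC socket until the target answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done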
00:04:08.152 16:19:26 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:08.152 16:19:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.409 [2024-07-21 16:19:26.382157] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:04:08.409 [2024-07-21 16:19:26.382291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60656 ] 00:04:08.409 [2024-07-21 16:19:26.523906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.665 [2024-07-21 16:19:26.681408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:08.665 [2024-07-21 16:19:26.681494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60656' to capture a snapshot of events at runtime. 00:04:08.665 [2024-07-21 16:19:26.681505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:08.665 [2024-07-21 16:19:26.681517] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:08.665 [2024-07-21 16:19:26.681524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60656 for offline analysis/debug. 00:04:08.665 [2024-07-21 16:19:26.681549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.229 16:19:27 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:09.229 16:19:27 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:09.229 16:19:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:09.229 16:19:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:09.229 16:19:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:09.229 16:19:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:09.229 16:19:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.229 16:19:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.229 16:19:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.229 ************************************ 00:04:09.229 START TEST rpc_integrity 00:04:09.229 ************************************ 00:04:09.229 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:09.229 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.229 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.229 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.229 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.229 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.229 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.500 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.500 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.500 16:19:27 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.500 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:09.500 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.500 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.500 { 00:04:09.500 "aliases": [ 00:04:09.500 "e9be80fd-9cd0-4cd2-8eb2-5619939dcc97" 00:04:09.500 ], 00:04:09.500 "assigned_rate_limits": { 00:04:09.500 "r_mbytes_per_sec": 0, 00:04:09.500 "rw_ios_per_sec": 0, 00:04:09.500 "rw_mbytes_per_sec": 0, 00:04:09.500 "w_mbytes_per_sec": 0 00:04:09.500 }, 00:04:09.500 "block_size": 512, 00:04:09.500 "claimed": false, 00:04:09.500 "driver_specific": {}, 00:04:09.500 "memory_domains": [ 00:04:09.500 { 00:04:09.500 "dma_device_id": "system", 00:04:09.500 "dma_device_type": 1 00:04:09.500 }, 00:04:09.500 { 00:04:09.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.500 "dma_device_type": 2 00:04:09.500 } 00:04:09.500 ], 00:04:09.500 "name": "Malloc0", 00:04:09.500 "num_blocks": 16384, 00:04:09.500 "product_name": "Malloc disk", 00:04:09.500 "supported_io_types": { 00:04:09.500 "abort": true, 00:04:09.500 "compare": false, 00:04:09.500 "compare_and_write": false, 00:04:09.500 "copy": true, 00:04:09.500 "flush": true, 00:04:09.500 "get_zone_info": false, 00:04:09.500 "nvme_admin": false, 00:04:09.500 "nvme_io": false, 00:04:09.500 "nvme_io_md": false, 00:04:09.500 "nvme_iov_md": false, 00:04:09.500 "read": true, 00:04:09.500 "reset": true, 00:04:09.500 "seek_data": false, 00:04:09.500 "seek_hole": false, 00:04:09.500 "unmap": true, 00:04:09.500 "write": true, 00:04:09.500 "write_zeroes": true, 00:04:09.500 "zcopy": true, 00:04:09.500 "zone_append": false, 00:04:09.500 "zone_management": false 00:04:09.500 }, 00:04:09.500 "uuid": "e9be80fd-9cd0-4cd2-8eb2-5619939dcc97", 00:04:09.500 "zoned": false 00:04:09.500 } 00:04:09.500 ]' 00:04:09.500 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.500 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.500 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.500 [2024-07-21 16:19:27.572821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:09.500 [2024-07-21 16:19:27.572905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.500 [2024-07-21 16:19:27.572937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb1fad0 00:04:09.500 [2024-07-21 16:19:27.572948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.500 [2024-07-21 16:19:27.575013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.500 [2024-07-21 16:19:27.575103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.500 Passthru0 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.500 16:19:27 
rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.500 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.500 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.500 { 00:04:09.500 "aliases": [ 00:04:09.500 "e9be80fd-9cd0-4cd2-8eb2-5619939dcc97" 00:04:09.500 ], 00:04:09.500 "assigned_rate_limits": { 00:04:09.500 "r_mbytes_per_sec": 0, 00:04:09.500 "rw_ios_per_sec": 0, 00:04:09.500 "rw_mbytes_per_sec": 0, 00:04:09.500 "w_mbytes_per_sec": 0 00:04:09.500 }, 00:04:09.500 "block_size": 512, 00:04:09.500 "claim_type": "exclusive_write", 00:04:09.500 "claimed": true, 00:04:09.500 "driver_specific": {}, 00:04:09.500 "memory_domains": [ 00:04:09.500 { 00:04:09.500 "dma_device_id": "system", 00:04:09.500 "dma_device_type": 1 00:04:09.500 }, 00:04:09.501 { 00:04:09.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.501 "dma_device_type": 2 00:04:09.501 } 00:04:09.501 ], 00:04:09.501 "name": "Malloc0", 00:04:09.501 "num_blocks": 16384, 00:04:09.501 "product_name": "Malloc disk", 00:04:09.501 "supported_io_types": { 00:04:09.501 "abort": true, 00:04:09.501 "compare": false, 00:04:09.501 "compare_and_write": false, 00:04:09.501 "copy": true, 00:04:09.501 "flush": true, 00:04:09.501 "get_zone_info": false, 00:04:09.501 "nvme_admin": false, 00:04:09.501 "nvme_io": false, 00:04:09.501 "nvme_io_md": false, 00:04:09.501 "nvme_iov_md": false, 00:04:09.501 "read": true, 00:04:09.501 "reset": true, 00:04:09.501 "seek_data": false, 00:04:09.501 "seek_hole": false, 00:04:09.501 "unmap": true, 00:04:09.501 "write": true, 00:04:09.501 "write_zeroes": true, 00:04:09.501 "zcopy": true, 00:04:09.501 "zone_append": false, 00:04:09.501 "zone_management": false 00:04:09.501 }, 00:04:09.501 "uuid": "e9be80fd-9cd0-4cd2-8eb2-5619939dcc97", 00:04:09.501 "zoned": false 00:04:09.501 }, 00:04:09.501 { 00:04:09.501 "aliases": [ 00:04:09.501 "2912a53e-47ca-502d-afbf-a009b8bf974d" 00:04:09.501 ], 00:04:09.501 "assigned_rate_limits": { 00:04:09.501 "r_mbytes_per_sec": 0, 00:04:09.501 "rw_ios_per_sec": 0, 00:04:09.501 "rw_mbytes_per_sec": 0, 00:04:09.501 "w_mbytes_per_sec": 0 00:04:09.501 }, 00:04:09.501 "block_size": 512, 00:04:09.501 "claimed": false, 00:04:09.501 "driver_specific": { 00:04:09.501 "passthru": { 00:04:09.501 "base_bdev_name": "Malloc0", 00:04:09.501 "name": "Passthru0" 00:04:09.501 } 00:04:09.501 }, 00:04:09.501 "memory_domains": [ 00:04:09.501 { 00:04:09.501 "dma_device_id": "system", 00:04:09.501 "dma_device_type": 1 00:04:09.501 }, 00:04:09.501 { 00:04:09.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.501 "dma_device_type": 2 00:04:09.501 } 00:04:09.501 ], 00:04:09.501 "name": "Passthru0", 00:04:09.501 "num_blocks": 16384, 00:04:09.501 "product_name": "passthru", 00:04:09.501 "supported_io_types": { 00:04:09.501 "abort": true, 00:04:09.501 "compare": false, 00:04:09.501 "compare_and_write": false, 00:04:09.501 "copy": true, 00:04:09.501 "flush": true, 00:04:09.501 "get_zone_info": false, 00:04:09.501 "nvme_admin": false, 00:04:09.501 "nvme_io": false, 00:04:09.501 "nvme_io_md": false, 00:04:09.501 "nvme_iov_md": false, 00:04:09.501 "read": true, 00:04:09.501 "reset": true, 00:04:09.501 "seek_data": false, 00:04:09.501 "seek_hole": false, 00:04:09.501 "unmap": true, 00:04:09.501 "write": true, 00:04:09.501 "write_zeroes": true, 00:04:09.501 "zcopy": true, 
00:04:09.501 "zone_append": false, 00:04:09.501 "zone_management": false 00:04:09.501 }, 00:04:09.501 "uuid": "2912a53e-47ca-502d-afbf-a009b8bf974d", 00:04:09.501 "zoned": false 00:04:09.501 } 00:04:09.501 ]' 00:04:09.501 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.501 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.501 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.501 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.501 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.501 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.501 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:09.501 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.501 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.501 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.501 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.501 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.501 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.501 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.501 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.501 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.758 16:19:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.758 00:04:09.758 real 0m0.327s 00:04:09.758 user 0m0.208s 00:04:09.758 sys 0m0.041s 00:04:09.758 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.758 16:19:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.758 ************************************ 00:04:09.758 END TEST rpc_integrity 00:04:09.758 ************************************ 00:04:09.758 16:19:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:09.758 16:19:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:09.758 16:19:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.758 16:19:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.758 16:19:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.758 ************************************ 00:04:09.758 START TEST rpc_plugins 00:04:09.758 ************************************ 00:04:09.758 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:09.758 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:09.758 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.758 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.758 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.758 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:09.758 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:09.758 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.758 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.758 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.758 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:09.758 { 
00:04:09.758 "aliases": [ 00:04:09.758 "7b693577-2c3c-4197-a507-6072c8134235" 00:04:09.758 ], 00:04:09.758 "assigned_rate_limits": { 00:04:09.758 "r_mbytes_per_sec": 0, 00:04:09.758 "rw_ios_per_sec": 0, 00:04:09.758 "rw_mbytes_per_sec": 0, 00:04:09.758 "w_mbytes_per_sec": 0 00:04:09.758 }, 00:04:09.758 "block_size": 4096, 00:04:09.758 "claimed": false, 00:04:09.758 "driver_specific": {}, 00:04:09.758 "memory_domains": [ 00:04:09.758 { 00:04:09.758 "dma_device_id": "system", 00:04:09.758 "dma_device_type": 1 00:04:09.758 }, 00:04:09.758 { 00:04:09.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.758 "dma_device_type": 2 00:04:09.758 } 00:04:09.758 ], 00:04:09.758 "name": "Malloc1", 00:04:09.758 "num_blocks": 256, 00:04:09.758 "product_name": "Malloc disk", 00:04:09.758 "supported_io_types": { 00:04:09.758 "abort": true, 00:04:09.758 "compare": false, 00:04:09.758 "compare_and_write": false, 00:04:09.758 "copy": true, 00:04:09.758 "flush": true, 00:04:09.758 "get_zone_info": false, 00:04:09.758 "nvme_admin": false, 00:04:09.758 "nvme_io": false, 00:04:09.758 "nvme_io_md": false, 00:04:09.758 "nvme_iov_md": false, 00:04:09.758 "read": true, 00:04:09.758 "reset": true, 00:04:09.758 "seek_data": false, 00:04:09.758 "seek_hole": false, 00:04:09.758 "unmap": true, 00:04:09.758 "write": true, 00:04:09.758 "write_zeroes": true, 00:04:09.758 "zcopy": true, 00:04:09.758 "zone_append": false, 00:04:09.758 "zone_management": false 00:04:09.759 }, 00:04:09.759 "uuid": "7b693577-2c3c-4197-a507-6072c8134235", 00:04:09.759 "zoned": false 00:04:09.759 } 00:04:09.759 ]' 00:04:09.759 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:09.759 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:09.759 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:09.759 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.759 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.759 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.759 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:09.759 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.759 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.759 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.759 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:09.759 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:09.759 16:19:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.759 00:04:09.759 real 0m0.160s 00:04:09.759 user 0m0.099s 00:04:09.759 sys 0m0.022s 00:04:09.759 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.759 ************************************ 00:04:09.759 END TEST rpc_plugins 00:04:09.759 ************************************ 00:04:09.759 16:19:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:10.016 16:19:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:10.016 16:19:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:10.016 16:19:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.016 16:19:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.016 16:19:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.016 ************************************ 00:04:10.016 START TEST 
rpc_trace_cmd_test 00:04:10.016 ************************************ 00:04:10.016 16:19:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:10.016 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:10.016 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:10.016 16:19:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.016 16:19:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:10.016 16:19:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.016 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:10.016 "bdev": { 00:04:10.016 "mask": "0x8", 00:04:10.016 "tpoint_mask": "0xffffffffffffffff" 00:04:10.016 }, 00:04:10.016 "bdev_nvme": { 00:04:10.016 "mask": "0x4000", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "blobfs": { 00:04:10.016 "mask": "0x80", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "dsa": { 00:04:10.016 "mask": "0x200", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "ftl": { 00:04:10.016 "mask": "0x40", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "iaa": { 00:04:10.016 "mask": "0x1000", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "iscsi_conn": { 00:04:10.016 "mask": "0x2", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "nvme_pcie": { 00:04:10.016 "mask": "0x800", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "nvme_tcp": { 00:04:10.016 "mask": "0x2000", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "nvmf_rdma": { 00:04:10.016 "mask": "0x10", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "nvmf_tcp": { 00:04:10.016 "mask": "0x20", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.016 "scsi": { 00:04:10.016 "mask": "0x4", 00:04:10.016 "tpoint_mask": "0x0" 00:04:10.016 }, 00:04:10.017 "sock": { 00:04:10.017 "mask": "0x8000", 00:04:10.017 "tpoint_mask": "0x0" 00:04:10.017 }, 00:04:10.017 "thread": { 00:04:10.017 "mask": "0x400", 00:04:10.017 "tpoint_mask": "0x0" 00:04:10.017 }, 00:04:10.017 "tpoint_group_mask": "0x8", 00:04:10.017 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60656" 00:04:10.017 }' 00:04:10.017 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:10.017 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:10.017 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:10.017 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:10.017 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:10.017 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:10.017 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:10.274 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:10.274 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:10.274 16:19:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:10.274 00:04:10.274 real 0m0.275s 00:04:10.274 user 0m0.236s 00:04:10.274 sys 0m0.032s 00:04:10.274 ************************************ 00:04:10.274 16:19:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.274 16:19:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:10.274 END TEST 
rpc_trace_cmd_test 00:04:10.274 ************************************ 00:04:10.274 16:19:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:10.274 16:19:28 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:10.274 16:19:28 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:10.274 16:19:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.274 16:19:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.274 16:19:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.274 ************************************ 00:04:10.274 START TEST go_rpc 00:04:10.275 ************************************ 00:04:10.275 16:19:28 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:04:10.275 16:19:28 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:10.275 16:19:28 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:10.275 16:19:28 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:10.275 16:19:28 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:10.275 16:19:28 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.275 16:19:28 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.275 16:19:28 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.275 16:19:28 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.275 16:19:28 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:10.275 16:19:28 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:10.275 16:19:28 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["1efe2107-5f70-457d-9b0a-31177341fe69"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"1efe2107-5f70-457d-9b0a-31177341fe69","zoned":false}]' 00:04:10.275 16:19:28 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:10.532 16:19:28 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:10.532 16:19:28 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:10.532 16:19:28 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.532 16:19:28 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.532 16:19:28 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.532 16:19:28 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:10.532 16:19:28 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:10.532 16:19:28 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:10.532 16:19:28 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:10.532 00:04:10.532 real 0m0.225s 00:04:10.532 user 0m0.155s 00:04:10.532 sys 0m0.036s 00:04:10.532 ************************************ 00:04:10.532 END TEST go_rpc 00:04:10.532 ************************************ 00:04:10.532 16:19:28 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.532 
16:19:28 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.532 16:19:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:10.532 16:19:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:10.532 16:19:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:10.532 16:19:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.532 16:19:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.532 16:19:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.532 ************************************ 00:04:10.532 START TEST rpc_daemon_integrity 00:04:10.532 ************************************ 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.532 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:10.532 { 00:04:10.532 "aliases": [ 00:04:10.532 "6ca1c2b7-42f1-47b2-b2c1-e2b966a95abd" 00:04:10.532 ], 00:04:10.532 "assigned_rate_limits": { 00:04:10.532 "r_mbytes_per_sec": 0, 00:04:10.532 "rw_ios_per_sec": 0, 00:04:10.532 "rw_mbytes_per_sec": 0, 00:04:10.532 "w_mbytes_per_sec": 0 00:04:10.532 }, 00:04:10.532 "block_size": 512, 00:04:10.532 "claimed": false, 00:04:10.532 "driver_specific": {}, 00:04:10.532 "memory_domains": [ 00:04:10.532 { 00:04:10.532 "dma_device_id": "system", 00:04:10.532 "dma_device_type": 1 00:04:10.532 }, 00:04:10.532 { 00:04:10.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.532 "dma_device_type": 2 00:04:10.532 } 00:04:10.532 ], 00:04:10.532 "name": "Malloc3", 00:04:10.532 "num_blocks": 16384, 00:04:10.532 "product_name": "Malloc disk", 00:04:10.532 "supported_io_types": { 00:04:10.532 "abort": true, 00:04:10.533 "compare": false, 00:04:10.533 "compare_and_write": false, 00:04:10.533 "copy": true, 00:04:10.533 "flush": true, 00:04:10.533 "get_zone_info": false, 00:04:10.533 "nvme_admin": false, 00:04:10.533 "nvme_io": false, 00:04:10.533 "nvme_io_md": false, 00:04:10.533 "nvme_iov_md": false, 00:04:10.533 "read": true, 00:04:10.533 "reset": true, 00:04:10.533 
"seek_data": false, 00:04:10.533 "seek_hole": false, 00:04:10.533 "unmap": true, 00:04:10.533 "write": true, 00:04:10.533 "write_zeroes": true, 00:04:10.533 "zcopy": true, 00:04:10.533 "zone_append": false, 00:04:10.533 "zone_management": false 00:04:10.533 }, 00:04:10.533 "uuid": "6ca1c2b7-42f1-47b2-b2c1-e2b966a95abd", 00:04:10.533 "zoned": false 00:04:10.533 } 00:04:10.533 ]' 00:04:10.533 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.790 [2024-07-21 16:19:28.777086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:10.790 [2024-07-21 16:19:28.777184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:10.790 [2024-07-21 16:19:28.777209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd16bd0 00:04:10.790 [2024-07-21 16:19:28.777218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.790 [2024-07-21 16:19:28.779027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.790 [2024-07-21 16:19:28.779077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:10.790 Passthru0 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:10.790 { 00:04:10.790 "aliases": [ 00:04:10.790 "6ca1c2b7-42f1-47b2-b2c1-e2b966a95abd" 00:04:10.790 ], 00:04:10.790 "assigned_rate_limits": { 00:04:10.790 "r_mbytes_per_sec": 0, 00:04:10.790 "rw_ios_per_sec": 0, 00:04:10.790 "rw_mbytes_per_sec": 0, 00:04:10.790 "w_mbytes_per_sec": 0 00:04:10.790 }, 00:04:10.790 "block_size": 512, 00:04:10.790 "claim_type": "exclusive_write", 00:04:10.790 "claimed": true, 00:04:10.790 "driver_specific": {}, 00:04:10.790 "memory_domains": [ 00:04:10.790 { 00:04:10.790 "dma_device_id": "system", 00:04:10.790 "dma_device_type": 1 00:04:10.790 }, 00:04:10.790 { 00:04:10.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.790 "dma_device_type": 2 00:04:10.790 } 00:04:10.790 ], 00:04:10.790 "name": "Malloc3", 00:04:10.790 "num_blocks": 16384, 00:04:10.790 "product_name": "Malloc disk", 00:04:10.790 "supported_io_types": { 00:04:10.790 "abort": true, 00:04:10.790 "compare": false, 00:04:10.790 "compare_and_write": false, 00:04:10.790 "copy": true, 00:04:10.790 "flush": true, 00:04:10.790 "get_zone_info": false, 00:04:10.790 "nvme_admin": false, 00:04:10.790 "nvme_io": false, 00:04:10.790 "nvme_io_md": false, 00:04:10.790 "nvme_iov_md": false, 00:04:10.790 "read": true, 00:04:10.790 "reset": true, 00:04:10.790 "seek_data": false, 00:04:10.790 "seek_hole": false, 00:04:10.790 "unmap": true, 00:04:10.790 "write": true, 00:04:10.790 
"write_zeroes": true, 00:04:10.790 "zcopy": true, 00:04:10.790 "zone_append": false, 00:04:10.790 "zone_management": false 00:04:10.790 }, 00:04:10.790 "uuid": "6ca1c2b7-42f1-47b2-b2c1-e2b966a95abd", 00:04:10.790 "zoned": false 00:04:10.790 }, 00:04:10.790 { 00:04:10.790 "aliases": [ 00:04:10.790 "81c16688-d029-573f-855e-9882e574ea18" 00:04:10.790 ], 00:04:10.790 "assigned_rate_limits": { 00:04:10.790 "r_mbytes_per_sec": 0, 00:04:10.790 "rw_ios_per_sec": 0, 00:04:10.790 "rw_mbytes_per_sec": 0, 00:04:10.790 "w_mbytes_per_sec": 0 00:04:10.790 }, 00:04:10.790 "block_size": 512, 00:04:10.790 "claimed": false, 00:04:10.790 "driver_specific": { 00:04:10.790 "passthru": { 00:04:10.790 "base_bdev_name": "Malloc3", 00:04:10.790 "name": "Passthru0" 00:04:10.790 } 00:04:10.790 }, 00:04:10.790 "memory_domains": [ 00:04:10.790 { 00:04:10.790 "dma_device_id": "system", 00:04:10.790 "dma_device_type": 1 00:04:10.790 }, 00:04:10.790 { 00:04:10.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.790 "dma_device_type": 2 00:04:10.790 } 00:04:10.790 ], 00:04:10.790 "name": "Passthru0", 00:04:10.790 "num_blocks": 16384, 00:04:10.790 "product_name": "passthru", 00:04:10.790 "supported_io_types": { 00:04:10.790 "abort": true, 00:04:10.790 "compare": false, 00:04:10.790 "compare_and_write": false, 00:04:10.790 "copy": true, 00:04:10.790 "flush": true, 00:04:10.790 "get_zone_info": false, 00:04:10.790 "nvme_admin": false, 00:04:10.790 "nvme_io": false, 00:04:10.790 "nvme_io_md": false, 00:04:10.790 "nvme_iov_md": false, 00:04:10.790 "read": true, 00:04:10.790 "reset": true, 00:04:10.790 "seek_data": false, 00:04:10.790 "seek_hole": false, 00:04:10.790 "unmap": true, 00:04:10.790 "write": true, 00:04:10.790 "write_zeroes": true, 00:04:10.790 "zcopy": true, 00:04:10.790 "zone_append": false, 00:04:10.790 "zone_management": false 00:04:10.790 }, 00:04:10.790 "uuid": "81c16688-d029-573f-855e-9882e574ea18", 00:04:10.790 "zoned": false 00:04:10.790 } 00:04:10.790 ]' 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.790 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 
00:04:10.791 00:04:10.791 real 0m0.328s 00:04:10.791 user 0m0.218s 00:04:10.791 sys 0m0.039s 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.791 16:19:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.791 ************************************ 00:04:10.791 END TEST rpc_daemon_integrity 00:04:10.791 ************************************ 00:04:10.791 16:19:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:10.791 16:19:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:10.791 16:19:28 rpc -- rpc/rpc.sh@84 -- # killprocess 60656 00:04:10.791 16:19:28 rpc -- common/autotest_common.sh@948 -- # '[' -z 60656 ']' 00:04:10.791 16:19:28 rpc -- common/autotest_common.sh@952 -- # kill -0 60656 00:04:10.791 16:19:28 rpc -- common/autotest_common.sh@953 -- # uname 00:04:10.791 16:19:28 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:10.791 16:19:28 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60656 00:04:11.048 16:19:29 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:11.048 16:19:29 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:11.048 killing process with pid 60656 00:04:11.048 16:19:29 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60656' 00:04:11.048 16:19:29 rpc -- common/autotest_common.sh@967 -- # kill 60656 00:04:11.048 16:19:29 rpc -- common/autotest_common.sh@972 -- # wait 60656 00:04:11.614 00:04:11.614 real 0m3.377s 00:04:11.614 user 0m4.306s 00:04:11.614 sys 0m0.880s 00:04:11.614 16:19:29 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.614 ************************************ 00:04:11.614 END TEST rpc 00:04:11.614 ************************************ 00:04:11.614 16:19:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.614 16:19:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:11.614 16:19:29 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:11.614 16:19:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.614 16:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.614 16:19:29 -- common/autotest_common.sh@10 -- # set +x 00:04:11.614 ************************************ 00:04:11.614 START TEST skip_rpc 00:04:11.614 ************************************ 00:04:11.614 16:19:29 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:11.614 * Looking for test storage... 
00:04:11.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.614 16:19:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:11.614 16:19:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:11.614 16:19:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:11.614 16:19:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.614 16:19:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.614 16:19:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.614 ************************************ 00:04:11.614 START TEST skip_rpc 00:04:11.614 ************************************ 00:04:11.614 16:19:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:11.614 16:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60922 00:04:11.614 16:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.614 16:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:11.614 16:19:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:11.872 [2024-07-21 16:19:29.825952] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:04:11.872 [2024-07-21 16:19:29.826064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60922 ] 00:04:11.872 [2024-07-21 16:19:29.965763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.130 [2024-07-21 16:19:30.106191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.393 2024/07/21 16:19:34 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:17.393 16:19:34 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60922 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60922 ']' 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60922 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60922 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:17.393 killing process with pid 60922 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60922' 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60922 00:04:17.393 16:19:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60922 00:04:17.393 00:04:17.393 real 0m5.653s 00:04:17.393 user 0m5.168s 00:04:17.393 sys 0m0.386s 00:04:17.393 16:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.393 16:19:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.393 ************************************ 00:04:17.393 END TEST skip_rpc 00:04:17.393 ************************************ 00:04:17.393 16:19:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:17.393 16:19:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:17.393 16:19:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.393 16:19:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.393 16:19:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.393 ************************************ 00:04:17.393 START TEST skip_rpc_with_json 00:04:17.393 ************************************ 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61015 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61015 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61015 ']' 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
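The JSON-config round trip this suite exercises below boils down to a few RPCs; a minimal manual sketch, assuming a target already listening on the default socket:

    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py nvmf_get_transports --trtype tcp   # fails (-19) until a TCP transport exists
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/config.json     # dump the live configuration as JSON
    # A second target can then replay it, e.g. spdk_tgt --no-rpc-server --json /tmp/config.json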
00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.393 16:19:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.393 [2024-07-21 16:19:35.532812] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:04:17.393 [2024-07-21 16:19:35.532914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61015 ] 00:04:17.653 [2024-07-21 16:19:35.673874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.653 [2024-07-21 16:19:35.829721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.589 [2024-07-21 16:19:36.527421] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:18.589 2024/07/21 16:19:36 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:18.589 request: 00:04:18.589 { 00:04:18.589 "method": "nvmf_get_transports", 00:04:18.589 "params": { 00:04:18.589 "trtype": "tcp" 00:04:18.589 } 00:04:18.589 } 00:04:18.589 Got JSON-RPC error response 00:04:18.589 GoRPCClient: error on JSON-RPC call 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.589 [2024-07-21 16:19:36.539493] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:18.589 16:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:18.589 { 00:04:18.589 "subsystems": [ 00:04:18.589 { 00:04:18.589 "subsystem": "keyring", 00:04:18.589 "config": [] 00:04:18.589 }, 00:04:18.589 { 00:04:18.589 "subsystem": "iobuf", 00:04:18.589 "config": [ 00:04:18.589 { 00:04:18.589 "method": "iobuf_set_options", 00:04:18.589 "params": { 00:04:18.589 "large_bufsize": 135168, 00:04:18.589 "large_pool_count": 1024, 00:04:18.589 "small_bufsize": 8192, 00:04:18.589 "small_pool_count": 8192 00:04:18.589 } 00:04:18.589 } 
00:04:18.589 ] 00:04:18.589 }, 00:04:18.589 { 00:04:18.589 "subsystem": "sock", 00:04:18.589 "config": [ 00:04:18.589 { 00:04:18.589 "method": "sock_set_default_impl", 00:04:18.589 "params": { 00:04:18.589 "impl_name": "posix" 00:04:18.589 } 00:04:18.589 }, 00:04:18.589 { 00:04:18.589 "method": "sock_impl_set_options", 00:04:18.589 "params": { 00:04:18.589 "enable_ktls": false, 00:04:18.589 "enable_placement_id": 0, 00:04:18.589 "enable_quickack": false, 00:04:18.589 "enable_recv_pipe": true, 00:04:18.589 "enable_zerocopy_send_client": false, 00:04:18.589 "enable_zerocopy_send_server": true, 00:04:18.589 "impl_name": "ssl", 00:04:18.589 "recv_buf_size": 4096, 00:04:18.589 "send_buf_size": 4096, 00:04:18.589 "tls_version": 0, 00:04:18.590 "zerocopy_threshold": 0 00:04:18.590 } 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "method": "sock_impl_set_options", 00:04:18.590 "params": { 00:04:18.590 "enable_ktls": false, 00:04:18.590 "enable_placement_id": 0, 00:04:18.590 "enable_quickack": false, 00:04:18.590 "enable_recv_pipe": true, 00:04:18.590 "enable_zerocopy_send_client": false, 00:04:18.590 "enable_zerocopy_send_server": true, 00:04:18.590 "impl_name": "posix", 00:04:18.590 "recv_buf_size": 2097152, 00:04:18.590 "send_buf_size": 2097152, 00:04:18.590 "tls_version": 0, 00:04:18.590 "zerocopy_threshold": 0 00:04:18.590 } 00:04:18.590 } 00:04:18.590 ] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "vmd", 00:04:18.590 "config": [] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "accel", 00:04:18.590 "config": [ 00:04:18.590 { 00:04:18.590 "method": "accel_set_options", 00:04:18.590 "params": { 00:04:18.590 "buf_count": 2048, 00:04:18.590 "large_cache_size": 16, 00:04:18.590 "sequence_count": 2048, 00:04:18.590 "small_cache_size": 128, 00:04:18.590 "task_count": 2048 00:04:18.590 } 00:04:18.590 } 00:04:18.590 ] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "bdev", 00:04:18.590 "config": [ 00:04:18.590 { 00:04:18.590 "method": "bdev_set_options", 00:04:18.590 "params": { 00:04:18.590 "bdev_auto_examine": true, 00:04:18.590 "bdev_io_cache_size": 256, 00:04:18.590 "bdev_io_pool_size": 65535, 00:04:18.590 "iobuf_large_cache_size": 16, 00:04:18.590 "iobuf_small_cache_size": 128 00:04:18.590 } 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "method": "bdev_raid_set_options", 00:04:18.590 "params": { 00:04:18.590 "process_max_bandwidth_mb_sec": 0, 00:04:18.590 "process_window_size_kb": 1024 00:04:18.590 } 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "method": "bdev_iscsi_set_options", 00:04:18.590 "params": { 00:04:18.590 "timeout_sec": 30 00:04:18.590 } 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "method": "bdev_nvme_set_options", 00:04:18.590 "params": { 00:04:18.590 "action_on_timeout": "none", 00:04:18.590 "allow_accel_sequence": false, 00:04:18.590 "arbitration_burst": 0, 00:04:18.590 "bdev_retry_count": 3, 00:04:18.590 "ctrlr_loss_timeout_sec": 0, 00:04:18.590 "delay_cmd_submit": true, 00:04:18.590 "dhchap_dhgroups": [ 00:04:18.590 "null", 00:04:18.590 "ffdhe2048", 00:04:18.590 "ffdhe3072", 00:04:18.590 "ffdhe4096", 00:04:18.590 "ffdhe6144", 00:04:18.590 "ffdhe8192" 00:04:18.590 ], 00:04:18.590 "dhchap_digests": [ 00:04:18.590 "sha256", 00:04:18.590 "sha384", 00:04:18.590 "sha512" 00:04:18.590 ], 00:04:18.590 "disable_auto_failback": false, 00:04:18.590 "fast_io_fail_timeout_sec": 0, 00:04:18.590 "generate_uuids": false, 00:04:18.590 "high_priority_weight": 0, 00:04:18.590 "io_path_stat": false, 00:04:18.590 "io_queue_requests": 0, 00:04:18.590 "keep_alive_timeout_ms": 
10000, 00:04:18.590 "low_priority_weight": 0, 00:04:18.590 "medium_priority_weight": 0, 00:04:18.590 "nvme_adminq_poll_period_us": 10000, 00:04:18.590 "nvme_error_stat": false, 00:04:18.590 "nvme_ioq_poll_period_us": 0, 00:04:18.590 "rdma_cm_event_timeout_ms": 0, 00:04:18.590 "rdma_max_cq_size": 0, 00:04:18.590 "rdma_srq_size": 0, 00:04:18.590 "reconnect_delay_sec": 0, 00:04:18.590 "timeout_admin_us": 0, 00:04:18.590 "timeout_us": 0, 00:04:18.590 "transport_ack_timeout": 0, 00:04:18.590 "transport_retry_count": 4, 00:04:18.590 "transport_tos": 0 00:04:18.590 } 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "method": "bdev_nvme_set_hotplug", 00:04:18.590 "params": { 00:04:18.590 "enable": false, 00:04:18.590 "period_us": 100000 00:04:18.590 } 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "method": "bdev_wait_for_examine" 00:04:18.590 } 00:04:18.590 ] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "scsi", 00:04:18.590 "config": null 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "scheduler", 00:04:18.590 "config": [ 00:04:18.590 { 00:04:18.590 "method": "framework_set_scheduler", 00:04:18.590 "params": { 00:04:18.590 "name": "static" 00:04:18.590 } 00:04:18.590 } 00:04:18.590 ] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "vhost_scsi", 00:04:18.590 "config": [] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "vhost_blk", 00:04:18.590 "config": [] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "ublk", 00:04:18.590 "config": [] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "nbd", 00:04:18.590 "config": [] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "nvmf", 00:04:18.590 "config": [ 00:04:18.590 { 00:04:18.590 "method": "nvmf_set_config", 00:04:18.590 "params": { 00:04:18.590 "admin_cmd_passthru": { 00:04:18.590 "identify_ctrlr": false 00:04:18.590 }, 00:04:18.590 "discovery_filter": "match_any" 00:04:18.590 } 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "method": "nvmf_set_max_subsystems", 00:04:18.590 "params": { 00:04:18.590 "max_subsystems": 1024 00:04:18.590 } 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "method": "nvmf_set_crdt", 00:04:18.590 "params": { 00:04:18.590 "crdt1": 0, 00:04:18.590 "crdt2": 0, 00:04:18.590 "crdt3": 0 00:04:18.590 } 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "method": "nvmf_create_transport", 00:04:18.590 "params": { 00:04:18.590 "abort_timeout_sec": 1, 00:04:18.590 "ack_timeout": 0, 00:04:18.590 "buf_cache_size": 4294967295, 00:04:18.590 "c2h_success": true, 00:04:18.590 "data_wr_pool_size": 0, 00:04:18.590 "dif_insert_or_strip": false, 00:04:18.590 "in_capsule_data_size": 4096, 00:04:18.590 "io_unit_size": 131072, 00:04:18.590 "max_aq_depth": 128, 00:04:18.590 "max_io_qpairs_per_ctrlr": 127, 00:04:18.590 "max_io_size": 131072, 00:04:18.590 "max_queue_depth": 128, 00:04:18.590 "num_shared_buffers": 511, 00:04:18.590 "sock_priority": 0, 00:04:18.590 "trtype": "TCP", 00:04:18.590 "zcopy": false 00:04:18.590 } 00:04:18.590 } 00:04:18.590 ] 00:04:18.590 }, 00:04:18.590 { 00:04:18.590 "subsystem": "iscsi", 00:04:18.590 "config": [ 00:04:18.590 { 00:04:18.590 "method": "iscsi_set_options", 00:04:18.590 "params": { 00:04:18.590 "allow_duplicated_isid": false, 00:04:18.590 "chap_group": 0, 00:04:18.590 "data_out_pool_size": 2048, 00:04:18.590 "default_time2retain": 20, 00:04:18.590 "default_time2wait": 2, 00:04:18.590 "disable_chap": false, 00:04:18.590 "error_recovery_level": 0, 00:04:18.590 "first_burst_length": 8192, 00:04:18.590 "immediate_data": true, 00:04:18.590 "immediate_data_pool_size": 16384, 
00:04:18.590 "max_connections_per_session": 2, 00:04:18.590 "max_large_datain_per_connection": 64, 00:04:18.590 "max_queue_depth": 64, 00:04:18.590 "max_r2t_per_connection": 4, 00:04:18.590 "max_sessions": 128, 00:04:18.590 "mutual_chap": false, 00:04:18.590 "node_base": "iqn.2016-06.io.spdk", 00:04:18.590 "nop_in_interval": 30, 00:04:18.590 "nop_timeout": 60, 00:04:18.590 "pdu_pool_size": 36864, 00:04:18.590 "require_chap": false 00:04:18.590 } 00:04:18.590 } 00:04:18.590 ] 00:04:18.590 } 00:04:18.590 ] 00:04:18.590 } 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61015 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61015 ']' 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61015 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61015 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:18.590 killing process with pid 61015 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61015' 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61015 00:04:18.590 16:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61015 00:04:19.527 16:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61060 00:04:19.527 16:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.527 16:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61060 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61060 ']' 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61060 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61060 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.793 killing process with pid 61060 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61060' 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61060 00:04:24.793 16:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61060 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport 
Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.050 00:04:25.050 real 0m7.553s 00:04:25.050 user 0m7.053s 00:04:25.050 sys 0m0.901s 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.050 ************************************ 00:04:25.050 END TEST skip_rpc_with_json 00:04:25.050 ************************************ 00:04:25.050 16:19:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.050 16:19:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:25.050 16:19:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.050 16:19:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.050 16:19:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.050 ************************************ 00:04:25.050 START TEST skip_rpc_with_delay 00:04:25.050 ************************************ 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.050 [2024-07-21 16:19:43.141480] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
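For context: the skip_rpc_with_delay test traced above treats the spdk_app_start error as the desired outcome, since --wait-for-rpc is meaningless when --no-rpc-server suppresses the RPC server, and the wrapper passes only if the command exits non-zero. A minimal sketch of that inversion pattern, with NOT written out as a simplified stand-in for the autotest_common.sh helper (the real one also validates that the argument is an executable):

    #!/usr/bin/env bash
    # Simplified stand-in for the NOT helper seen in the xtrace above:
    # run a command that is expected to fail, succeed only if it really failed.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # pass when the wrapped command exited non-zero
    }

    # Expected to be rejected by spdk_tgt, so NOT turns the rejection into a pass.
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc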
00:04:25.050 [2024-07-21 16:19:43.141609] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:25.050 00:04:25.050 real 0m0.092s 00:04:25.050 user 0m0.054s 00:04:25.050 sys 0m0.037s 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.050 ************************************ 00:04:25.050 END TEST skip_rpc_with_delay 00:04:25.050 ************************************ 00:04:25.050 16:19:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:25.050 16:19:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.050 16:19:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:25.050 16:19:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:25.050 16:19:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:25.050 16:19:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.050 16:19:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.050 16:19:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.050 ************************************ 00:04:25.050 START TEST exit_on_failed_rpc_init 00:04:25.050 ************************************ 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61174 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61174 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61174 ']' 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.050 16:19:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.308 [2024-07-21 16:19:43.272570] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:04:25.308 [2024-07-21 16:19:43.272670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61174 ] 00:04:25.308 [2024-07-21 16:19:43.406118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.565 [2024-07-21 16:19:43.557335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:26.128 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.128 [2024-07-21 16:19:44.334439] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:04:26.128 [2024-07-21 16:19:44.334557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61205 ] 00:04:26.385 [2024-07-21 16:19:44.474430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.385 [2024-07-21 16:19:44.567342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.385 [2024-07-21 16:19:44.567502] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
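The rpc.c error just above ("RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.") is the point of exit_on_failed_rpc_init: a second target started against an RPC socket the first target already owns must fail RPC initialization and exit non-zero. A hedged sketch of reproducing that by hand; the binary path and the default socket /var/tmp/spdk.sock are taken from the trace, while the fixed sleep is a simplification of the socket polling the suite actually does:

    #!/usr/bin/env bash
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance claims the default RPC socket (/var/tmp/spdk.sock).
    "$SPDK_TGT" -m 0x1 &
    first_pid=$!
    sleep 2   # crude readiness wait; the test polls the socket instead

    # Same RPC socket, different core mask: RPC init fails, the app exits non-zero.
    if "$SPDK_TGT" -m 0x2; then
        echo "unexpected: second target started" >&2
    else
        echo "second target failed as expected"
    fi

    kill -SIGINT "$first_pid"
    wait "$first_pid"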
00:04:26.385 [2024-07-21 16:19:44.567522] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:26.385 [2024-07-21 16:19:44.567533] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61174 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61174 ']' 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61174 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61174 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:26.642 killing process with pid 61174 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61174' 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61174 00:04:26.642 16:19:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61174 00:04:27.207 00:04:27.207 real 0m2.114s 00:04:27.207 user 0m2.320s 00:04:27.207 sys 0m0.522s 00:04:27.207 16:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.207 16:19:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.207 ************************************ 00:04:27.207 END TEST exit_on_failed_rpc_init 00:04:27.207 ************************************ 00:04:27.207 16:19:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:27.207 16:19:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.207 00:04:27.207 real 0m15.717s 00:04:27.207 user 0m14.697s 00:04:27.207 sys 0m2.036s 00:04:27.207 16:19:45 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.207 16:19:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.207 ************************************ 00:04:27.207 END TEST skip_rpc 00:04:27.207 ************************************ 00:04:27.464 16:19:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.464 16:19:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:27.464 16:19:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.464 
16:19:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.464 16:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 ************************************ 00:04:27.464 START TEST rpc_client 00:04:27.464 ************************************ 00:04:27.464 16:19:45 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:27.464 * Looking for test storage... 00:04:27.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:27.464 16:19:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:27.464 OK 00:04:27.464 16:19:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:27.464 00:04:27.464 real 0m0.103s 00:04:27.464 user 0m0.049s 00:04:27.464 sys 0m0.058s 00:04:27.464 16:19:45 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.464 16:19:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 ************************************ 00:04:27.464 END TEST rpc_client 00:04:27.464 ************************************ 00:04:27.464 16:19:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:27.464 16:19:45 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:27.464 16:19:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.464 16:19:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.464 16:19:45 -- common/autotest_common.sh@10 -- # set +x 00:04:27.464 ************************************ 00:04:27.464 START TEST json_config 00:04:27.464 ************************************ 00:04:27.464 16:19:45 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.464 16:19:45 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.464 16:19:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.464 16:19:45 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.464 16:19:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.464 16:19:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.464 16:19:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.464 16:19:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.464 16:19:45 json_config -- paths/export.sh@5 -- # export PATH 00:04:27.464 16:19:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@47 -- # : 0 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:27.464 16:19:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.464 INFO: JSON configuration test init 00:04:27.464 16:19:45 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:27.465 16:19:45 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:27.465 16:19:45 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:27.465 16:19:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.465 16:19:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.465 16:19:45 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:27.465 16:19:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.465 16:19:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.465 16:19:45 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:27.465 16:19:45 json_config -- json_config/common.sh@9 -- # local app=target 00:04:27.465 16:19:45 json_config -- json_config/common.sh@10 -- # shift 00:04:27.722 16:19:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.722 16:19:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.722 16:19:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.722 16:19:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.722 16:19:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.722 16:19:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61323 00:04:27.722 Waiting for target to run... 00:04:27.722 16:19:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
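"Waiting for target to run..." above is printed while waitforlisten blocks until the target answers on the socket passed with -r. A rough sketch of that readiness poll, using the rpc_get_methods RPC as the probe; the 60 x 0.5s budget is an illustrative choice, and the real helper additionally checks that the pid is still alive:

    #!/usr/bin/env bash
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock

    # Poll the RPC socket until the target responds (roughly 30 seconds max).
    for _ in $(seq 1 60); do
        if "$rpc_py" -s "$sock" rpc_get_methods &> /dev/null; then
            echo "target is listening on $sock"
            break
        fi
        sleep 0.5
    done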
00:04:27.722 16:19:45 json_config -- json_config/common.sh@25 -- # waitforlisten 61323 /var/tmp/spdk_tgt.sock 00:04:27.722 16:19:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 61323 ']' 00:04:27.722 16:19:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.722 16:19:45 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:27.722 16:19:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.722 16:19:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.722 16:19:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.722 16:19:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.722 [2024-07-21 16:19:45.738769] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:04:27.722 [2024-07-21 16:19:45.738874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61323 ] 00:04:27.979 [2024-07-21 16:19:46.161977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.237 [2024-07-21 16:19:46.283767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.802 16:19:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.802 00:04:28.802 16:19:46 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:28.802 16:19:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:28.802 16:19:46 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:28.802 16:19:46 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:28.802 16:19:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.802 16:19:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.802 16:19:46 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:28.802 16:19:46 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:28.802 16:19:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.802 16:19:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.802 16:19:46 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:28.802 16:19:46 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:28.802 16:19:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:29.368 16:19:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.368 16:19:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:29.368 16:19:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@51 -- # sort 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:29.368 16:19:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.368 16:19:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:29.368 16:19:47 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:29.368 16:19:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:29.368 16:19:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.626 16:19:47 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:29.626 16:19:47 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:29.626 16:19:47 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:29.626 16:19:47 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.626 16:19:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.626 MallocForNvmf0 00:04:29.626 16:19:47 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.626 16:19:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.884 MallocForNvmf1 00:04:29.884 16:19:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.884 16:19:48 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.141 [2024-07-21 16:19:48.254500] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.141 16:19:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.141 16:19:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.398 16:19:48 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.398 16:19:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.654 16:19:48 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.654 16:19:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.911 16:19:48 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.911 16:19:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.168 [2024-07-21 16:19:49.203239] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.168 16:19:49 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:31.168 16:19:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.168 16:19:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.168 16:19:49 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:31.168 16:19:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.168 16:19:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.168 16:19:49 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:31.168 16:19:49 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.168 16:19:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.426 MallocBdevForConfigChangeCheck 00:04:31.426 16:19:49 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:31.426 16:19:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.426 16:19:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.683 16:19:49 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:31.683 16:19:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.941 INFO: shutting down applications... 00:04:31.941 16:19:50 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
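For reference, the NVMe-oF bring-up that the json_config test drives over RPC in the trace above, collected into one runnable sequence. Every command, the 127.0.0.1:4420 listener, and the cnode1 subsystem name are taken from the log; only the grouping into a single script is new:

    #!/usr/bin/env bash
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Backing bdevs for the two namespaces.
    $rpc_py bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc_py bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, subsystem, namespaces, and listener.
    $rpc_py nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420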
00:04:31.941 16:19:50 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:31.941 16:19:50 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:31.941 16:19:50 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:31.941 16:19:50 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:32.198 Calling clear_iscsi_subsystem 00:04:32.198 Calling clear_nvmf_subsystem 00:04:32.198 Calling clear_nbd_subsystem 00:04:32.198 Calling clear_ublk_subsystem 00:04:32.198 Calling clear_vhost_blk_subsystem 00:04:32.198 Calling clear_vhost_scsi_subsystem 00:04:32.198 Calling clear_bdev_subsystem 00:04:32.198 16:19:50 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:32.198 16:19:50 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:32.198 16:19:50 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:32.198 16:19:50 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:32.199 16:19:50 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.199 16:19:50 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:32.764 16:19:50 json_config -- json_config/json_config.sh@349 -- # break 00:04:32.764 16:19:50 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:32.764 16:19:50 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:32.764 16:19:50 json_config -- json_config/common.sh@31 -- # local app=target 00:04:32.764 16:19:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:32.764 16:19:50 json_config -- json_config/common.sh@35 -- # [[ -n 61323 ]] 00:04:32.764 16:19:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61323 00:04:32.764 16:19:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:32.764 16:19:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.764 16:19:50 json_config -- json_config/common.sh@41 -- # kill -0 61323 00:04:32.764 16:19:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.328 16:19:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.328 16:19:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.328 16:19:51 json_config -- json_config/common.sh@41 -- # kill -0 61323 00:04:33.328 16:19:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.328 16:19:51 json_config -- json_config/common.sh@43 -- # break 00:04:33.328 16:19:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.328 SPDK target shutdown done 00:04:33.328 16:19:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.328 INFO: relaunching applications... 00:04:33.328 16:19:51 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
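The shutdown that precedes the relaunch above follows a simple pattern: send SIGINT, then poll with kill -0 until the reactor exits. A restatement of that loop from json_config_test_shutdown_app as it appears in the trace (pid 61323 is the instance from this run; substitute your own):

    #!/usr/bin/env bash
    app_pid=61323   # pid of the target from the run above; illustrative only

    kill -SIGINT "$app_pid"
    # Up to 30 probes at 0.5s intervals, matching the loop in the trace.
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done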
00:04:33.328 16:19:51 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.328 16:19:51 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.328 16:19:51 json_config -- json_config/common.sh@10 -- # shift 00:04:33.328 16:19:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.328 16:19:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.328 16:19:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.328 16:19:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.328 16:19:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.328 16:19:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61597 00:04:33.328 16:19:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.328 Waiting for target to run... 00:04:33.328 16:19:51 json_config -- json_config/common.sh@25 -- # waitforlisten 61597 /var/tmp/spdk_tgt.sock 00:04:33.328 16:19:51 json_config -- common/autotest_common.sh@829 -- # '[' -z 61597 ']' 00:04:33.328 16:19:51 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.328 16:19:51 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.328 16:19:51 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.328 16:19:51 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.328 16:19:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.328 16:19:51 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.328 [2024-07-21 16:19:51.376419] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:04:33.328 [2024-07-21 16:19:51.376525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61597 ] 00:04:33.890 [2024-07-21 16:19:51.812696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.890 [2024-07-21 16:19:51.931153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.146 [2024-07-21 16:19:52.271723] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.146 [2024-07-21 16:19:52.303878] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.402 16:19:52 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.402 16:19:52 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:34.402 00:04:34.402 16:19:52 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.402 16:19:52 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:34.402 INFO: Checking if target configuration is the same... 00:04:34.402 16:19:52 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
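"INFO: Checking if target configuration is the same..." is answered by dumping the live configuration with save_config and diffing it against spdk_tgt_config.json after both sides are passed through config_filter.py -method sort, so ordering differences are not reported as changes. A condensed sketch of that comparison, assuming config_filter.py reads JSON on stdin (as the json_diff.sh trace that follows suggests); the /tmp output names here are illustrative:

    #!/usr/bin/env bash
    spdk_dir=/home/vagrant/spdk_repo/spdk
    rpc_py="$spdk_dir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=$spdk_dir/test/json_config/config_filter.py

    # Normalize both the live config and the on-disk config, then compare.
    $rpc_py save_config | "$filter" -method sort > /tmp/live_config.json
    "$filter" -method sort < "$spdk_dir/spdk_tgt_config.json" > /tmp/saved_config.json

    if diff -u /tmp/saved_config.json /tmp/live_config.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi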
00:04:34.402 16:19:52 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:34.402 16:19:52 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.402 16:19:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.402 + '[' 2 -ne 2 ']' 00:04:34.402 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:34.402 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:34.402 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:34.402 +++ basename /dev/fd/62 00:04:34.402 ++ mktemp /tmp/62.XXX 00:04:34.402 + tmp_file_1=/tmp/62.wNC 00:04:34.402 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.402 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.402 + tmp_file_2=/tmp/spdk_tgt_config.json.cGe 00:04:34.402 + ret=0 00:04:34.402 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.659 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.659 + diff -u /tmp/62.wNC /tmp/spdk_tgt_config.json.cGe 00:04:34.659 INFO: JSON config files are the same 00:04:34.659 + echo 'INFO: JSON config files are the same' 00:04:34.659 + rm /tmp/62.wNC /tmp/spdk_tgt_config.json.cGe 00:04:34.659 + exit 0 00:04:34.659 16:19:52 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:34.660 INFO: changing configuration and checking if this can be detected... 00:04:34.660 16:19:52 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:34.660 16:19:52 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.660 16:19:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.228 16:19:53 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.228 16:19:53 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:35.228 16:19:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.228 + '[' 2 -ne 2 ']' 00:04:35.228 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:35.228 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:35.228 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:35.228 +++ basename /dev/fd/62 00:04:35.228 ++ mktemp /tmp/62.XXX 00:04:35.228 + tmp_file_1=/tmp/62.qtY 00:04:35.228 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.228 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.228 + tmp_file_2=/tmp/spdk_tgt_config.json.t5y 00:04:35.228 + ret=0 00:04:35.228 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.486 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.486 + diff -u /tmp/62.qtY /tmp/spdk_tgt_config.json.t5y 00:04:35.486 + ret=1 00:04:35.486 + echo '=== Start of file: /tmp/62.qtY ===' 00:04:35.486 + cat /tmp/62.qtY 00:04:35.486 + echo '=== End of file: /tmp/62.qtY ===' 00:04:35.486 + echo '' 00:04:35.486 + echo '=== Start of file: /tmp/spdk_tgt_config.json.t5y ===' 00:04:35.486 + cat /tmp/spdk_tgt_config.json.t5y 00:04:35.486 + echo '=== End of file: /tmp/spdk_tgt_config.json.t5y ===' 00:04:35.486 + echo '' 00:04:35.486 + rm /tmp/62.qtY /tmp/spdk_tgt_config.json.t5y 00:04:35.486 + exit 1 00:04:35.486 INFO: configuration change detected. 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@321 -- # [[ -n 61597 ]] 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.486 16:19:53 json_config -- json_config/json_config.sh@327 -- # killprocess 61597 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@948 -- # '[' -z 61597 ']' 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@952 -- # kill -0 61597 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@953 -- # uname 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61597 00:04:35.486 
16:19:53 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.486 killing process with pid 61597 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61597' 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@967 -- # kill 61597 00:04:35.486 16:19:53 json_config -- common/autotest_common.sh@972 -- # wait 61597 00:04:36.051 16:19:54 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.051 16:19:54 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:36.051 16:19:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.051 16:19:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.051 16:19:54 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:36.051 INFO: Success 00:04:36.051 16:19:54 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:36.051 00:04:36.051 real 0m8.541s 00:04:36.051 user 0m12.113s 00:04:36.051 sys 0m1.955s 00:04:36.051 16:19:54 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.051 ************************************ 00:04:36.051 END TEST json_config 00:04:36.051 16:19:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.051 ************************************ 00:04:36.051 16:19:54 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.051 16:19:54 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.051 16:19:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.051 16:19:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.051 16:19:54 -- common/autotest_common.sh@10 -- # set +x 00:04:36.051 ************************************ 00:04:36.051 START TEST json_config_extra_key 00:04:36.051 ************************************ 00:04:36.051 16:19:54 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.051 16:19:54 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.051 16:19:54 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.051 16:19:54 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.051 16:19:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.051 16:19:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.051 16:19:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.051 16:19:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:36.051 16:19:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.051 16:19:54 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:36.051 16:19:54 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.051 INFO: launching applications... 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:36.051 16:19:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61773 00:04:36.052 Waiting for target to run... 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61773 /var/tmp/spdk_tgt.sock 00:04:36.052 16:19:54 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61773 ']' 00:04:36.052 16:19:54 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:36.052 16:19:54 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.052 16:19:54 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.052 16:19:54 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.052 16:19:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.052 16:19:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.309 [2024-07-21 16:19:54.332352] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:04:36.309 [2024-07-21 16:19:54.332506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61773 ] 00:04:36.874 [2024-07-21 16:19:54.888719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.874 [2024-07-21 16:19:55.005602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.132 16:19:55 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.132 16:19:55 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:37.132 16:19:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.132 00:04:37.132 INFO: shutting down applications... 00:04:37.132 16:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:37.132 16:19:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.132 16:19:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.132 16:19:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.132 16:19:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61773 ]] 00:04:37.132 16:19:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61773 00:04:37.132 16:19:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.132 16:19:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.132 16:19:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61773 00:04:37.132 16:19:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.695 16:19:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.695 16:19:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.695 16:19:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61773 00:04:37.695 16:19:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.259 16:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.259 16:19:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.259 16:19:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61773 00:04:38.259 16:19:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:38.259 16:19:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:38.259 16:19:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:38.259 SPDK target shutdown done 00:04:38.259 16:19:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:38.259 Success 00:04:38.259 16:19:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:38.259 00:04:38.259 real 0m2.171s 00:04:38.259 user 0m1.687s 00:04:38.259 sys 0m0.582s 00:04:38.259 16:19:56 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.259 ************************************ 00:04:38.259 END TEST json_config_extra_key 00:04:38.259 ************************************ 00:04:38.259 16:19:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.259 16:19:56 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.259 16:19:56 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.259 16:19:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.259 16:19:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.259 16:19:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.259 ************************************ 00:04:38.259 START TEST alias_rpc 00:04:38.259 ************************************ 00:04:38.259 16:19:56 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.517 * Looking for test storage... 
00:04:38.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:38.517 16:19:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:38.517 16:19:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61857 00:04:38.517 16:19:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.517 16:19:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61857 00:04:38.517 16:19:56 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61857 ']' 00:04:38.517 16:19:56 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.517 16:19:56 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.517 16:19:56 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.517 16:19:56 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.517 16:19:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.517 [2024-07-21 16:19:56.568405] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:04:38.517 [2024-07-21 16:19:56.568528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61857 ] 00:04:38.517 [2024-07-21 16:19:56.704509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.775 [2024-07-21 16:19:56.852601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:39.705 16:19:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:39.705 16:19:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61857 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61857 ']' 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61857 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61857 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.705 killing process with pid 61857 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61857' 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@967 -- # kill 61857 00:04:39.705 16:19:57 alias_rpc -- common/autotest_common.sh@972 -- # wait 61857 00:04:40.636 00:04:40.636 real 0m2.070s 00:04:40.636 user 0m2.226s 00:04:40.636 sys 0m0.563s 00:04:40.636 16:19:58 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.636 ************************************ 00:04:40.636 END TEST alias_rpc 00:04:40.636 ************************************ 00:04:40.636 16:19:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.636 
16:19:58 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.636 16:19:58 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:04:40.636 16:19:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:40.636 16:19:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.636 16:19:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.636 16:19:58 -- common/autotest_common.sh@10 -- # set +x 00:04:40.636 ************************************ 00:04:40.636 START TEST dpdk_mem_utility 00:04:40.636 ************************************ 00:04:40.636 16:19:58 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:40.636 * Looking for test storage... 00:04:40.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:40.636 16:19:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:40.636 16:19:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61954 00:04:40.636 16:19:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.636 16:19:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61954 00:04:40.636 16:19:58 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61954 ']' 00:04:40.636 16:19:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.636 16:19:58 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.636 16:19:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.636 16:19:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.636 16:19:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.636 [2024-07-21 16:19:58.683701] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:04:40.636 [2024-07-21 16:19:58.683833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61954 ] 00:04:40.636 [2024-07-21 16:19:58.819743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.894 [2024-07-21 16:19:58.979646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.487 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.487 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:41.487 16:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:41.487 16:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:41.487 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.487 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.487 { 00:04:41.487 "filename": "/tmp/spdk_mem_dump.txt" 00:04:41.487 } 00:04:41.487 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.487 16:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:41.771 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:41.771 1 heaps totaling size 814.000000 MiB 00:04:41.771 size: 814.000000 MiB heap id: 0 00:04:41.771 end heaps---------- 00:04:41.771 8 mempools totaling size 598.116089 MiB 00:04:41.771 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:41.771 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:41.771 size: 84.521057 MiB name: bdev_io_61954 00:04:41.771 size: 51.011292 MiB name: evtpool_61954 00:04:41.771 size: 50.003479 MiB name: msgpool_61954 00:04:41.771 size: 21.763794 MiB name: PDU_Pool 00:04:41.771 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:41.771 size: 0.026123 MiB name: Session_Pool 00:04:41.772 end mempools------- 00:04:41.772 6 memzones totaling size 4.142822 MiB 00:04:41.772 size: 1.000366 MiB name: RG_ring_0_61954 00:04:41.772 size: 1.000366 MiB name: RG_ring_1_61954 00:04:41.772 size: 1.000366 MiB name: RG_ring_4_61954 00:04:41.772 size: 1.000366 MiB name: RG_ring_5_61954 00:04:41.772 size: 0.125366 MiB name: RG_ring_2_61954 00:04:41.772 size: 0.015991 MiB name: RG_ring_3_61954 00:04:41.772 end memzones------- 00:04:41.772 16:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:41.772 heap id: 0 total size: 814.000000 MiB number of busy elements: 216 number of free elements: 15 00:04:41.772 list of free elements. 
size: 12.487305 MiB 00:04:41.772 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:41.772 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:41.772 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:41.772 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:41.772 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:41.772 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:41.772 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:41.772 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:41.772 element at address: 0x200000200000 with size: 0.837036 MiB 00:04:41.772 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:04:41.772 element at address: 0x20000b200000 with size: 0.489807 MiB 00:04:41.772 element at address: 0x200000800000 with size: 0.487061 MiB 00:04:41.772 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:41.772 element at address: 0x200027e00000 with size: 0.398682 MiB 00:04:41.772 element at address: 0x200003a00000 with size: 0.351685 MiB 00:04:41.772 list of standard malloc elements. size: 199.250122 MiB 00:04:41.772 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:41.772 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:41.772 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:41.772 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:41.772 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:41.772 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:41.772 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:41.772 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:41.772 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:41.772 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:04:41.772 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:41.772 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94780 
with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:41.772 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:41.773 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e66100 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e661c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6cdc0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 
00:04:41.773 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:41.773 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:41.773 list of memzone associated elements. 
size: 602.262573 MiB 00:04:41.773 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:41.773 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:41.773 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:41.773 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:41.773 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:41.773 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61954_0 00:04:41.773 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:41.773 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61954_0 00:04:41.773 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:41.773 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61954_0 00:04:41.773 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:41.773 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:41.773 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:41.773 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:41.773 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:41.773 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61954 00:04:41.773 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:41.773 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61954 00:04:41.773 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:41.773 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61954 00:04:41.773 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:41.773 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:41.773 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:41.773 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:41.773 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:41.773 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:41.773 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:41.773 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:41.773 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:41.773 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61954 00:04:41.773 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:41.773 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61954 00:04:41.773 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:41.773 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61954 00:04:41.773 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:41.773 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61954 00:04:41.773 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:41.773 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61954 00:04:41.773 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:41.773 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:41.773 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:41.773 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:41.773 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:41.773 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:41.773 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:41.773 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61954 00:04:41.773 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:41.773 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:41.773 element at address: 0x200027e66280 with size: 0.023743 MiB 00:04:41.773 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:41.773 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:41.773 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61954 00:04:41.773 element at address: 0x200027e6c3c0 with size: 0.002441 MiB 00:04:41.773 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:41.773 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:41.773 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61954 00:04:41.773 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:41.773 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61954 00:04:41.773 element at address: 0x200027e6ce80 with size: 0.000305 MiB 00:04:41.773 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:41.773 16:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:41.773 16:19:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61954 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61954 ']' 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61954 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61954 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.773 killing process with pid 61954 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61954' 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61954 00:04:41.773 16:19:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61954 00:04:42.338 00:04:42.338 real 0m1.854s 00:04:42.338 user 0m1.863s 00:04:42.338 sys 0m0.542s 00:04:42.338 16:20:00 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.338 ************************************ 00:04:42.338 END TEST dpdk_mem_utility 00:04:42.338 16:20:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.338 ************************************ 00:04:42.338 16:20:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.338 16:20:00 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:42.338 16:20:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.338 16:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.338 16:20:00 -- common/autotest_common.sh@10 -- # set +x 00:04:42.338 ************************************ 00:04:42.338 START TEST event 00:04:42.338 ************************************ 00:04:42.338 16:20:00 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:42.338 * Looking for test storage... 
00:04:42.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:42.338 16:20:00 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:42.338 16:20:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:42.338 16:20:00 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.338 16:20:00 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:42.338 16:20:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.338 16:20:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.338 ************************************ 00:04:42.338 START TEST event_perf 00:04:42.338 ************************************ 00:04:42.338 16:20:00 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.595 Running I/O for 1 seconds...[2024-07-21 16:20:00.563726] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:04:42.595 [2024-07-21 16:20:00.563831] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62043 ] 00:04:42.595 [2024-07-21 16:20:00.703809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.852 [2024-07-21 16:20:00.853757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.852 [2024-07-21 16:20:00.853924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.852 [2024-07-21 16:20:00.854046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.852 [2024-07-21 16:20:00.854074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.784 Running I/O for 1 seconds... 00:04:43.784 lcore 0: 109516 00:04:43.784 lcore 1: 109518 00:04:43.784 lcore 2: 109519 00:04:43.784 lcore 3: 109517 00:04:43.784 done. 00:04:43.784 00:04:43.784 real 0m1.439s 00:04:43.784 user 0m4.219s 00:04:43.784 sys 0m0.076s 00:04:43.784 16:20:01 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.784 ************************************ 00:04:43.784 END TEST event_perf 00:04:43.784 ************************************ 00:04:43.784 16:20:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.040 16:20:02 event -- common/autotest_common.sh@1142 -- # return 0 00:04:44.040 16:20:02 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.040 16:20:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:44.041 16:20:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.041 16:20:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.041 ************************************ 00:04:44.041 START TEST event_reactor 00:04:44.041 ************************************ 00:04:44.041 16:20:02 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:44.041 [2024-07-21 16:20:02.055175] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:04:44.041 [2024-07-21 16:20:02.055340] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62087 ] 00:04:44.041 [2024-07-21 16:20:02.192781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.297 [2024-07-21 16:20:02.326806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.668 test_start 00:04:45.668 oneshot 00:04:45.668 tick 100 00:04:45.668 tick 100 00:04:45.668 tick 250 00:04:45.668 tick 100 00:04:45.668 tick 100 00:04:45.668 tick 100 00:04:45.668 tick 250 00:04:45.668 tick 500 00:04:45.668 tick 100 00:04:45.669 tick 100 00:04:45.669 tick 250 00:04:45.669 tick 100 00:04:45.669 tick 100 00:04:45.669 test_end 00:04:45.669 00:04:45.669 real 0m1.411s 00:04:45.669 user 0m1.227s 00:04:45.669 sys 0m0.076s 00:04:45.669 16:20:03 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.669 16:20:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:45.669 ************************************ 00:04:45.669 END TEST event_reactor 00:04:45.669 ************************************ 00:04:45.669 16:20:03 event -- common/autotest_common.sh@1142 -- # return 0 00:04:45.669 16:20:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.669 16:20:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:45.669 16:20:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.669 16:20:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.669 ************************************ 00:04:45.669 START TEST event_reactor_perf 00:04:45.669 ************************************ 00:04:45.669 16:20:03 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.669 [2024-07-21 16:20:03.529544] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:04:45.669 [2024-07-21 16:20:03.529663] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62117 ] 00:04:45.669 [2024-07-21 16:20:03.667249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.669 [2024-07-21 16:20:03.823671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.037 test_start 00:04:47.037 test_end 00:04:47.037 Performance: 391330 events per second 00:04:47.037 00:04:47.037 real 0m1.440s 00:04:47.037 user 0m1.258s 00:04:47.037 sys 0m0.075s 00:04:47.037 16:20:04 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.037 16:20:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.037 ************************************ 00:04:47.037 END TEST event_reactor_perf 00:04:47.037 ************************************ 00:04:47.037 16:20:04 event -- common/autotest_common.sh@1142 -- # return 0 00:04:47.037 16:20:04 event -- event/event.sh@49 -- # uname -s 00:04:47.037 16:20:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.037 16:20:05 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.037 16:20:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.037 16:20:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.037 16:20:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.037 ************************************ 00:04:47.037 START TEST event_scheduler 00:04:47.037 ************************************ 00:04:47.037 16:20:05 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:47.037 * Looking for test storage... 00:04:47.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:47.037 16:20:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.037 16:20:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62184 00:04:47.037 16:20:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.037 16:20:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62184 00:04:47.037 16:20:05 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62184 ']' 00:04:47.037 16:20:05 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.037 16:20:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.037 16:20:05 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.037 16:20:05 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.037 16:20:05 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.037 16:20:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.037 [2024-07-21 16:20:05.152638] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:04:47.037 [2024-07-21 16:20:05.152742] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62184 ] 00:04:47.294 [2024-07-21 16:20:05.291520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.294 [2024-07-21 16:20:05.411028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.294 [2024-07-21 16:20:05.411172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.294 [2024-07-21 16:20:05.411311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.294 [2024-07-21 16:20:05.411301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:48.225 16:20:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.225 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.225 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.225 POWER: Cannot set governor of lcore 0 to performance 00:04:48.225 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.225 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.225 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.225 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.225 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:48.225 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:48.225 POWER: Unable to set Power Management Environment for lcore 0 00:04:48.225 [2024-07-21 16:20:06.168951] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:48.225 [2024-07-21 16:20:06.168964] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:48.225 [2024-07-21 16:20:06.168973] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:48.225 [2024-07-21 16:20:06.168983] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:48.225 [2024-07-21 16:20:06.168991] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:48.225 [2024-07-21 16:20:06.168997] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 [2024-07-21 16:20:06.265598] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 ************************************ 00:04:48.225 START TEST scheduler_create_thread 00:04:48.225 ************************************ 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 2 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 3 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 4 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 5 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 6 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 7 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 8 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 9 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 10 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.225 16:20:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.137 16:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.137 16:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:50.137 16:20:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:50.137 16:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.137 16:20:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.703 16:20:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.703 00:04:50.703 real 0m2.615s 00:04:50.703 user 0m0.017s 00:04:50.703 sys 0m0.009s 00:04:50.703 16:20:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.703 16:20:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.703 ************************************ 00:04:50.703 END TEST scheduler_create_thread 00:04:50.703 ************************************ 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:50.961 16:20:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:50.961 16:20:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62184 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62184 ']' 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62184 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62184 00:04:50.961 killing process with pid 62184 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62184' 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62184 00:04:50.961 16:20:08 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62184 00:04:51.220 [2024-07-21 16:20:09.373727] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
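The scheduler_create_thread test above drives the scheduler test app over JSON-RPC through a plugin: scheduler_thread_create registers threads with a name (-n), an optional cpumask (-m) and an active percentage (-a), scheduler_thread_set_active changes one thread's activity, scheduler_thread_delete removes another, and the app (pid 62184) is then killed. A condensed sketch of that RPC sequence issued directly with scripts/rpc.py, which rpc_cmd in the trace wraps; the socket default and plugin import path are assumptions handled by the harness, and the sketch reuses a single thread id for brevity where the trace adjusts one thread (11) and deletes a different one (12):

    # Hedged sketch of the RPC calls exercised above; assumes the scheduler test
    # app is already running and scheduler_plugin is importable (rpc_cmd sets
    # both up in the real test).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"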
00:04:51.479 ************************************ 00:04:51.479 END TEST event_scheduler 00:04:51.479 ************************************ 00:04:51.479 00:04:51.479 real 0m4.600s 00:04:51.479 user 0m8.725s 00:04:51.479 sys 0m0.399s 00:04:51.479 16:20:09 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.479 16:20:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.479 16:20:09 event -- common/autotest_common.sh@1142 -- # return 0 00:04:51.479 16:20:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:51.479 16:20:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:51.479 16:20:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.479 16:20:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.479 16:20:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.479 ************************************ 00:04:51.479 START TEST app_repeat 00:04:51.479 ************************************ 00:04:51.479 16:20:09 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:51.479 Process app_repeat pid: 62296 00:04:51.479 spdk_app_start Round 0 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62296 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62296' 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:51.479 16:20:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62296 /var/tmp/spdk-nbd.sock 00:04:51.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.479 16:20:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62296 ']' 00:04:51.479 16:20:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.479 16:20:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.479 16:20:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.479 16:20:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.479 16:20:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.738 [2024-07-21 16:20:09.698686] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:04:51.738 [2024-07-21 16:20:09.698779] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62296 ] 00:04:51.738 [2024-07-21 16:20:09.834788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.997 [2024-07-21 16:20:09.953518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.997 [2024-07-21 16:20:09.953528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.564 16:20:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.564 16:20:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:52.564 16:20:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.823 Malloc0 00:04:52.823 16:20:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.082 Malloc1 00:04:53.082 16:20:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.082 16:20:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.341 /dev/nbd0 00:04:53.341 16:20:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.341 16:20:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:53.341 16:20:11 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.341 1+0 records in 00:04:53.341 1+0 records out 00:04:53.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236692 s, 17.3 MB/s 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.341 16:20:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.341 16:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.341 16:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.341 16:20:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.908 /dev/nbd1 00:04:53.908 16:20:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.908 16:20:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.908 1+0 records in 00:04:53.908 1+0 records out 00:04:53.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336803 s, 12.2 MB/s 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.908 16:20:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.908 16:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.908 16:20:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.908 16:20:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.908 16:20:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:04:53.908 16:20:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.167 { 00:04:54.167 "bdev_name": "Malloc0", 00:04:54.167 "nbd_device": "/dev/nbd0" 00:04:54.167 }, 00:04:54.167 { 00:04:54.167 "bdev_name": "Malloc1", 00:04:54.167 "nbd_device": "/dev/nbd1" 00:04:54.167 } 00:04:54.167 ]' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.167 { 00:04:54.167 "bdev_name": "Malloc0", 00:04:54.167 "nbd_device": "/dev/nbd0" 00:04:54.167 }, 00:04:54.167 { 00:04:54.167 "bdev_name": "Malloc1", 00:04:54.167 "nbd_device": "/dev/nbd1" 00:04:54.167 } 00:04:54.167 ]' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.167 /dev/nbd1' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.167 /dev/nbd1' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.167 256+0 records in 00:04:54.167 256+0 records out 00:04:54.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010542 s, 99.5 MB/s 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.167 256+0 records in 00:04:54.167 256+0 records out 00:04:54.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243239 s, 43.1 MB/s 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.167 256+0 records in 00:04:54.167 256+0 records out 00:04:54.167 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267693 s, 39.2 MB/s 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.167 16:20:12 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.167 16:20:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.168 16:20:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.168 16:20:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.168 16:20:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.168 16:20:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.168 16:20:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.427 16:20:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.994 16:20:12 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.994 16:20:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.994 16:20:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.994 16:20:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.559 16:20:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.559 [2024-07-21 16:20:13.705150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.817 [2024-07-21 16:20:13.811239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.817 [2024-07-21 16:20:13.811232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.817 [2024-07-21 16:20:13.871226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.817 [2024-07-21 16:20:13.871288] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.398 16:20:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.398 spdk_app_start Round 1 00:04:58.398 16:20:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:58.399 16:20:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62296 /var/tmp/spdk-nbd.sock 00:04:58.399 16:20:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62296 ']' 00:04:58.399 16:20:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.399 16:20:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.399 16:20:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:58.399 16:20:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.399 16:20:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.684 16:20:16 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.684 16:20:16 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:58.684 16:20:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.943 Malloc0 00:04:58.943 16:20:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.201 Malloc1 00:04:59.201 16:20:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.201 16:20:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.458 /dev/nbd0 00:04:59.458 16:20:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.458 16:20:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.458 16:20:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:59.458 16:20:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.458 16:20:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.458 16:20:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.459 1+0 records in 00:04:59.459 1+0 records out 
00:04:59.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310233 s, 13.2 MB/s 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.459 16:20:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.459 16:20:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.459 16:20:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.459 16:20:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.717 /dev/nbd1 00:04:59.717 16:20:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.717 16:20:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.717 1+0 records in 00:04:59.717 1+0 records out 00:04:59.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509361 s, 8.0 MB/s 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.717 16:20:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.717 16:20:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.717 16:20:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.717 16:20:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.717 16:20:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.717 16:20:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.975 16:20:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.975 { 00:04:59.975 "bdev_name": "Malloc0", 00:04:59.975 "nbd_device": "/dev/nbd0" 00:04:59.975 }, 00:04:59.975 { 00:04:59.975 "bdev_name": "Malloc1", 00:04:59.975 "nbd_device": "/dev/nbd1" 00:04:59.975 } 
00:04:59.975 ]' 00:04:59.975 16:20:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.975 { 00:04:59.975 "bdev_name": "Malloc0", 00:04:59.975 "nbd_device": "/dev/nbd0" 00:04:59.975 }, 00:04:59.975 { 00:04:59.975 "bdev_name": "Malloc1", 00:04:59.975 "nbd_device": "/dev/nbd1" 00:04:59.975 } 00:04:59.975 ]' 00:04:59.975 16:20:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.975 16:20:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.975 /dev/nbd1' 00:04:59.975 16:20:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.975 16:20:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.975 /dev/nbd1' 00:04:59.975 16:20:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.975 16:20:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.232 256+0 records in 00:05:00.232 256+0 records out 00:05:00.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00703906 s, 149 MB/s 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.232 256+0 records in 00:05:00.232 256+0 records out 00:05:00.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277339 s, 37.8 MB/s 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.232 256+0 records in 00:05:00.232 256+0 records out 00:05:00.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299641 s, 35.0 MB/s 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.232 16:20:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.233 16:20:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.233 16:20:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.490 16:20:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.490 16:20:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.490 16:20:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.490 16:20:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.490 16:20:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.490 16:20:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.490 16:20:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.490 16:20:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.491 16:20:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.491 16:20:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.748 16:20:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.006 16:20:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.006 16:20:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.265 16:20:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.524 [2024-07-21 16:20:19.679928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.782 [2024-07-21 16:20:19.806815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.782 [2024-07-21 16:20:19.806824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.782 [2024-07-21 16:20:19.869441] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.782 [2024-07-21 16:20:19.869517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.310 16:20:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.310 spdk_app_start Round 2 00:05:04.310 16:20:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:04.310 16:20:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62296 /var/tmp/spdk-nbd.sock 00:05:04.310 16:20:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62296 ']' 00:05:04.310 16:20:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.310 16:20:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.310 16:20:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:04.310 16:20:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.310 16:20:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.568 16:20:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.568 16:20:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:04.568 16:20:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.825 Malloc0 00:05:04.825 16:20:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.390 Malloc1 00:05:05.390 16:20:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.390 /dev/nbd0 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.390 1+0 records in 00:05:05.390 1+0 records out 
00:05:05.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244404 s, 16.8 MB/s 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.390 16:20:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.390 16:20:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.654 /dev/nbd1 00:05:05.654 16:20:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.654 16:20:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.654 1+0 records in 00:05:05.654 1+0 records out 00:05:05.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253912 s, 16.1 MB/s 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.654 16:20:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.654 16:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.654 16:20:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.924 16:20:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.924 16:20:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.924 16:20:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.924 16:20:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.924 { 00:05:05.924 "bdev_name": "Malloc0", 00:05:05.924 "nbd_device": "/dev/nbd0" 00:05:05.924 }, 00:05:05.924 { 00:05:05.924 "bdev_name": "Malloc1", 00:05:05.924 "nbd_device": "/dev/nbd1" 00:05:05.924 } 
00:05:05.924 ]' 00:05:05.924 16:20:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.924 { 00:05:05.924 "bdev_name": "Malloc0", 00:05:05.924 "nbd_device": "/dev/nbd0" 00:05:05.924 }, 00:05:05.924 { 00:05:05.924 "bdev_name": "Malloc1", 00:05:05.924 "nbd_device": "/dev/nbd1" 00:05:05.924 } 00:05:05.924 ]' 00:05:05.924 16:20:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.924 16:20:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.924 /dev/nbd1' 00:05:05.924 16:20:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.924 /dev/nbd1' 00:05:05.924 16:20:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.180 16:20:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.181 256+0 records in 00:05:06.181 256+0 records out 00:05:06.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00972409 s, 108 MB/s 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.181 256+0 records in 00:05:06.181 256+0 records out 00:05:06.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027133 s, 38.6 MB/s 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.181 256+0 records in 00:05:06.181 256+0 records out 00:05:06.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281885 s, 37.2 MB/s 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.181 16:20:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.438 16:20:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.695 16:20:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.953 16:20:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.953 16:20:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.953 16:20:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:06.953 16:20:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.953 16:20:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.953 16:20:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.953 16:20:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.953 16:20:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.953 16:20:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.953 16:20:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.953 16:20:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.953 16:20:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.953 16:20:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.211 16:20:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.469 [2024-07-21 16:20:25.502736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.469 [2024-07-21 16:20:25.596383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.469 [2024-07-21 16:20:25.596386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.469 [2024-07-21 16:20:25.656017] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.469 [2024-07-21 16:20:25.656123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.747 16:20:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62296 /var/tmp/spdk-nbd.sock 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62296 ']' 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
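The nbd_dd_data_verify steps above boil down to a write-then-compare pattern: fill a scratch file with random data, dd it onto every exported /dev/nbdX with O_DIRECT so the writes reach the backing bdev, then byte-compare each device against the scratch file before the disks are stopped. A minimal standalone sketch of that pattern, assuming two illustrative devices and a temporary scratch path rather than the exact paths used by the test:

    # write 1 MiB of random data and verify it on each NBD device (paths illustrative)
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # reference pattern
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # bypass the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                              # non-zero exit on mismatch
    done
    rm "$tmp_file"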
00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:10.747 16:20:28 event.app_repeat -- event/event.sh@39 -- # killprocess 62296 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62296 ']' 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62296 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62296 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.747 killing process with pid 62296 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62296' 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62296 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62296 00:05:10.747 spdk_app_start is called in Round 0. 00:05:10.747 Shutdown signal received, stop current app iteration 00:05:10.747 Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 reinitialization... 00:05:10.747 spdk_app_start is called in Round 1. 00:05:10.747 Shutdown signal received, stop current app iteration 00:05:10.747 Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 reinitialization... 00:05:10.747 spdk_app_start is called in Round 2. 00:05:10.747 Shutdown signal received, stop current app iteration 00:05:10.747 Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 reinitialization... 00:05:10.747 spdk_app_start is called in Round 3. 
00:05:10.747 Shutdown signal received, stop current app iteration 00:05:10.747 16:20:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:10.747 16:20:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:10.747 00:05:10.747 real 0m19.159s 00:05:10.747 user 0m42.689s 00:05:10.747 sys 0m3.128s 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.747 16:20:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.747 ************************************ 00:05:10.747 END TEST app_repeat 00:05:10.747 ************************************ 00:05:10.747 16:20:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:10.747 16:20:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:10.747 16:20:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:10.747 16:20:28 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.747 16:20:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.747 16:20:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.747 ************************************ 00:05:10.747 START TEST cpu_locks 00:05:10.747 ************************************ 00:05:10.747 16:20:28 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:10.747 * Looking for test storage... 00:05:11.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:11.006 16:20:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.006 16:20:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.006 16:20:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.006 16:20:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.006 16:20:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.006 16:20:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.006 16:20:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.006 ************************************ 00:05:11.006 START TEST default_locks 00:05:11.006 ************************************ 00:05:11.006 16:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:11.006 16:20:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62921 00:05:11.006 16:20:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62921 00:05:11.006 16:20:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.006 16:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62921 ']' 00:05:11.006 16:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.006 16:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.006 16:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
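The app_repeat teardown above leans on killprocess, which is deliberately cautious: it only signals a pid that still exists, checks on Linux that the command name looks like an SPDK reactor (reactor_0) rather than a sudo wrapper, then sends SIGTERM and reaps the process. A simplified stand-in for that helper, with the function name and structure chosen here for illustration rather than copied from autotest_common.sh:

    # cautious SPDK target teardown in the spirit of killprocess()
    kill_spdk_target() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
        name=$(ps --no-headers -o comm= "$pid")       # e.g. "reactor_0" for an SPDK app
        [ "$name" = sudo ] && return 1                # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid ($name)"
        kill "$pid"                                   # default SIGTERM, clean shutdown
        wait "$pid" 2>/dev/null || true               # reap it if it is our child
    }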
00:05:11.006 16:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.006 16:20:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.006 [2024-07-21 16:20:29.041977] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:11.006 [2024-07-21 16:20:29.042086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62921 ] 00:05:11.006 [2024-07-21 16:20:29.180609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.263 [2024-07-21 16:20:29.299472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.828 16:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.828 16:20:29 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:11.828 16:20:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62921 00:05:11.828 16:20:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62921 00:05:11.828 16:20:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62921 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62921 ']' 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62921 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62921 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62921' 00:05:12.394 killing process with pid 62921 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62921 00:05:12.394 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62921 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62921 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62921 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62921 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62921 ']' 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.653 ERROR: process (pid: 62921) is no longer running 00:05:12.653 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62921) - No such process 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.653 00:05:12.653 real 0m1.766s 00:05:12.653 user 0m1.853s 00:05:12.653 sys 0m0.526s 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.653 16:20:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.653 ************************************ 00:05:12.653 END TEST default_locks 00:05:12.653 ************************************ 00:05:12.653 16:20:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:12.653 16:20:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.653 16:20:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.653 16:20:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.653 16:20:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.653 ************************************ 00:05:12.653 START TEST default_locks_via_rpc 00:05:12.653 ************************************ 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62985 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62985 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 62985 ']' 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.653 16:20:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.653 [2024-07-21 16:20:30.855740] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:12.653 [2024-07-21 16:20:30.855864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62985 ] 00:05:12.912 [2024-07-21 16:20:30.990225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.912 [2024-07-21 16:20:31.093683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62985 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.846 16:20:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62985 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62985 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62985 ']' 00:05:14.167 
16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62985 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62985 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.167 killing process with pid 62985 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62985' 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62985 00:05:14.167 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62985 00:05:14.748 00:05:14.748 real 0m1.924s 00:05:14.748 user 0m2.081s 00:05:14.748 sys 0m0.588s 00:05:14.748 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.748 ************************************ 00:05:14.748 END TEST default_locks_via_rpc 00:05:14.748 ************************************ 00:05:14.748 16:20:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.748 16:20:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:14.748 16:20:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.748 16:20:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.748 16:20:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.748 16:20:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.748 ************************************ 00:05:14.748 START TEST non_locking_app_on_locked_coremask 00:05:14.748 ************************************ 00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63055 00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63055 /var/tmp/spdk.sock 00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63055 ']' 00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
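The default_locks_via_rpc run that just ended exercises the same core-lock machinery over JSON-RPC instead of command-line flags: the target is told to drop its CPU-mask locks, the test confirms that lslocks no longer reports an spdk_cpu_lock entry for the pid, then the locks are re-enabled and the check is repeated. A rough sketch of that flow using scripts/rpc.py directly, where the socket path and pid variable are placeholders:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    pid=$spdk_tgt_pid                                      # pid of the running spdk_tgt
    "$rpc" -s "$sock" framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: core lock still held" >&2
    "$rpc" -s "$sock" framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"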
00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.748 16:20:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.748 [2024-07-21 16:20:32.850605] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:14.748 [2024-07-21 16:20:32.850751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63055 ] 00:05:15.006 [2024-07-21 16:20:32.993324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.006 [2024-07-21 16:20:33.120133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63083 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63083 /var/tmp/spdk2.sock 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63083 ']' 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.940 16:20:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.941 [2024-07-21 16:20:33.875894] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:15.941 [2024-07-21 16:20:33.875996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63083 ] 00:05:15.941 [2024-07-21 16:20:34.023342] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
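The pair of launches above is the point of --disable-cpumask-locks: the first spdk_tgt (-m 0x1) takes the file lock for core 0, so a second instance pinned to the same core can only start if it skips the lock (hence the "CPU core locks deactivated." notice) and listens on its own RPC socket. A condensed sketch of that arrangement, with the startup waits omitted:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                                                   # claims core 0 and its lock file
    pid1=$!
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # shares core 0, takes no lock
    pid2=$!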
00:05:15.941 [2024-07-21 16:20:34.023409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.198 [2024-07-21 16:20:34.263730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.765 16:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.765 16:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:16.765 16:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63055 00:05:16.765 16:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63055 00:05:16.765 16:20:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63055 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63055 ']' 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63055 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63055 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.697 killing process with pid 63055 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63055' 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63055 00:05:17.697 16:20:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63055 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63083 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63083 ']' 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63083 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63083 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.630 killing process with pid 63083 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63083' 00:05:18.630 16:20:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63083 00:05:18.630 16:20:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63083 00:05:18.888 ************************************ 00:05:18.888 END TEST non_locking_app_on_locked_coremask 00:05:18.888 ************************************ 00:05:18.888 00:05:18.888 real 0m4.230s 00:05:18.888 user 0m4.722s 00:05:18.888 sys 0m1.201s 00:05:18.888 16:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.888 16:20:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.888 16:20:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:18.888 16:20:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:18.888 16:20:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.888 16:20:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.888 16:20:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.888 ************************************ 00:05:18.888 START TEST locking_app_on_unlocked_coremask 00:05:18.888 ************************************ 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:18.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63162 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63162 /var/tmp/spdk.sock 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63162 ']' 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.888 16:20:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.146 [2024-07-21 16:20:37.128685] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:19.146 [2024-07-21 16:20:37.128782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63162 ] 00:05:19.146 [2024-07-21 16:20:37.264493] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:19.146 [2024-07-21 16:20:37.264530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.403 [2024-07-21 16:20:37.363185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63190 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63190 /var/tmp/spdk2.sock 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63190 ']' 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.337 16:20:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.337 [2024-07-21 16:20:38.245346] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
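Each "Waiting for process to start up and listen on UNIX domain socket ..." message comes from a bounded retry loop: the helper keeps probing the new target's RPC socket until it answers, then returns so the test can continue. A hedged approximation of that loop; the probe RPC and sleep interval are illustrative choices here, not necessarily what autotest_common.sh does:

    # poll an SPDK RPC socket until the target responds (illustrative)
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 1; i <= max_retries; i++)); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }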
00:05:20.337 [2024-07-21 16:20:38.246319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63190 ] 00:05:20.337 [2024-07-21 16:20:38.391571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.595 [2024-07-21 16:20:38.599998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.161 16:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.161 16:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:21.161 16:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63190 00:05:21.161 16:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63190 00:05:21.161 16:20:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63162 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63162 ']' 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63162 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63162 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.136 killing process with pid 63162 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63162' 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63162 00:05:22.136 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63162 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63190 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63190 ']' 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63190 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63190 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63190' 00:05:22.705 killing process with pid 63190 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63190 00:05:22.705 16:20:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63190 00:05:23.271 ************************************ 00:05:23.271 END TEST locking_app_on_unlocked_coremask 00:05:23.271 ************************************ 00:05:23.271 00:05:23.271 real 0m4.212s 00:05:23.271 user 0m4.742s 00:05:23.271 sys 0m1.169s 00:05:23.271 16:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.271 16:20:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.271 16:20:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:23.271 16:20:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:23.271 16:20:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.272 16:20:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.272 16:20:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.272 ************************************ 00:05:23.272 START TEST locking_app_on_locked_coremask 00:05:23.272 ************************************ 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:23.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63269 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63269 /var/tmp/spdk.sock 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63269 ']' 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.272 16:20:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.272 [2024-07-21 16:20:41.400304] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
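locking_app_on_locked_coremask is about to start a second target on core 0, which pid 63269 already holds, and (as in default_locks earlier) the call is wrapped in NOT because a non-zero exit is the expected, correct outcome. A simplified stand-in for that expect-failure wrapper, with the name chosen here purely for illustration:

    # succeed only if the wrapped command fails (simplified NOT-style helper)
    expect_failure() {
        if "$@"; then
            echo "command unexpectedly succeeded: $*" >&2
            return 1
        fi
        return 0
    }
    # usage: expect_failure wait_for_rpc /var/tmp/spdk2.sock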
00:05:23.272 [2024-07-21 16:20:41.400445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:05:23.530 [2024-07-21 16:20:41.538508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.530 [2024-07-21 16:20:41.643590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63297 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63297 /var/tmp/spdk2.sock 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63297 /var/tmp/spdk2.sock 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63297 /var/tmp/spdk2.sock 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63297 ']' 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.464 16:20:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.464 [2024-07-21 16:20:42.416657] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:05:24.464 [2024-07-21 16:20:42.417038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63297 ] 00:05:24.464 [2024-07-21 16:20:42.558843] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63269 has claimed it. 00:05:24.464 [2024-07-21 16:20:42.558954] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:25.029 ERROR: process (pid: 63297) is no longer running 00:05:25.029 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63297) - No such process 00:05:25.029 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.029 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:25.029 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:25.029 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.029 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.029 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.029 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63269 00:05:25.029 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63269 00:05:25.030 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63269 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63269 ']' 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63269 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63269 00:05:25.301 killing process with pid 63269 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63269' 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63269 00:05:25.301 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63269 00:05:25.866 00:05:25.866 real 0m2.545s 00:05:25.866 user 0m2.914s 00:05:25.866 sys 0m0.605s 00:05:25.866 16:20:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.866 16:20:43 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:25.866 ************************************ 00:05:25.866 END TEST locking_app_on_locked_coremask 00:05:25.866 ************************************ 00:05:25.866 16:20:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:25.866 16:20:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:25.866 16:20:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.866 16:20:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.866 16:20:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.866 ************************************ 00:05:25.866 START TEST locking_overlapped_coremask 00:05:25.866 ************************************ 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63354 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63354 /var/tmp/spdk.sock 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63354 ']' 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.866 16:20:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.866 [2024-07-21 16:20:43.995937] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
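locking_overlapped_coremask starts its first target with -m 0x7, a hexadecimal core mask selecting cores 0-2 (hence "Total cores available: 3" and reactors on cores 0, 1 and 2 below), while the second target will use -m 0x1c, cores 2-4, so the two masks intentionally overlap on core 2. A tiny, purely illustrative helper for decoding such masks:

    # list the core indices selected by a hex cpumask
    mask_to_cores() {
        local mask=$(( $1 )) bit cores=()
        for (( bit = 0; mask > 0; bit++, mask >>= 1 )); do
            (( mask & 1 )) && cores+=( "$bit" )
        done
        echo "${cores[@]}"
    }
    mask_to_cores 0x7     # -> 0 1 2
    mask_to_cores 0x1c    # -> 2 3 4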
00:05:25.866 [2024-07-21 16:20:43.996397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63354 ] 00:05:26.123 [2024-07-21 16:20:44.130675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.123 [2024-07-21 16:20:44.246047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.123 [2024-07-21 16:20:44.246197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.123 [2024-07-21 16:20:44.246200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.056 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.056 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:27.056 16:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63384 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63384 /var/tmp/spdk2.sock 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63384 /var/tmp/spdk2.sock 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63384 /var/tmp/spdk2.sock 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63384 ']' 00:05:27.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.057 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.057 [2024-07-21 16:20:45.062578] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:05:27.057 [2024-07-21 16:20:45.062691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63384 ] 00:05:27.057 [2024-07-21 16:20:45.205178] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63354 has claimed it. 00:05:27.057 [2024-07-21 16:20:45.205266] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.635 ERROR: process (pid: 63384) is no longer running 00:05:27.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63384) - No such process 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63354 00:05:27.635 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63354 ']' 00:05:27.636 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63354 00:05:27.636 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:27.636 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.636 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63354 00:05:27.636 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.636 killing process with pid 63354 00:05:27.636 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.636 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63354' 00:05:27.636 16:20:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63354 00:05:27.636 16:20:45 
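check_remaining_locks above inspects the lock files themselves: a target started with -m 0x7 should leave exactly /var/tmp/spdk_cpu_lock_000 through _002 behind while it runs, one file per claimed core. A condensed version of that comparison, mirroring the globs from the log with the error handling trimmed:

    locks=( /var/tmp/spdk_cpu_lock_* )
    locks_expected=( /var/tmp/spdk_cpu_lock_{000..002} )
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "core locks 000-002 present as expected"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
    fi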
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63354 00:05:28.199 00:05:28.199 real 0m2.312s 00:05:28.199 user 0m6.423s 00:05:28.199 sys 0m0.483s 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.199 ************************************ 00:05:28.199 END TEST locking_overlapped_coremask 00:05:28.199 ************************************ 00:05:28.199 16:20:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:28.199 16:20:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.199 16:20:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.199 16:20:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.199 16:20:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.199 ************************************ 00:05:28.199 START TEST locking_overlapped_coremask_via_rpc 00:05:28.199 ************************************ 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63430 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63430 /var/tmp/spdk.sock 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63430 ']' 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.199 16:20:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.199 [2024-07-21 16:20:46.362740] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:28.199 [2024-07-21 16:20:46.362830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63430 ] 00:05:28.457 [2024-07-21 16:20:46.501355] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:28.457 [2024-07-21 16:20:46.501409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.457 [2024-07-21 16:20:46.597351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.457 [2024-07-21 16:20:46.597469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.457 [2024-07-21 16:20:46.597473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63460 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63460 /var/tmp/spdk2.sock 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63460 ']' 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.391 16:20:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.391 [2024-07-21 16:20:47.383014] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:29.391 [2024-07-21 16:20:47.383111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63460 ] 00:05:29.391 [2024-07-21 16:20:47.529246] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:29.391 [2024-07-21 16:20:47.529303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.649 [2024-07-21 16:20:47.791326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.649 [2024-07-21 16:20:47.794377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.649 [2024-07-21 16:20:47.794377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:30.222 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.222 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:30.222 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.222 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.222 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.512 [2024-07-21 16:20:48.437384] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63430 has claimed it. 
00:05:30.512 2024/07/21 16:20:48 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:30.512 request: 00:05:30.512 { 00:05:30.512 "method": "framework_enable_cpumask_locks", 00:05:30.512 "params": {} 00:05:30.512 } 00:05:30.512 Got JSON-RPC error response 00:05:30.512 GoRPCClient: error on JSON-RPC call 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63430 /var/tmp/spdk.sock 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63430 ']' 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63460 /var/tmp/spdk2.sock 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63460 ']' 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
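For reference, the Code=-32603 "Failed to claim CPU core: 2" above follows directly from the two core masks in play: the first spdk_tgt was started with -m 0x7 (cores 0-2) and --disable-cpumask-locks, then claimed its cores via the framework_enable_cpumask_locks RPC, which creates /var/tmp/spdk_cpu_lock_000 through _002; the second target was started with -m 0x1c (cores 2-4), so its own enable-locks call cannot claim the shared core 2. A rough reproduction sketch using only commands visible in this log (rpc_cmd is the autotest helper seen above):

  # target 1: cores 0-2 (0x7), locks initially disabled, then claimed over RPC
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  rpc_cmd framework_enable_cpumask_locks              # creates /var/tmp/spdk_cpu_lock_000..002
  ls /var/tmp/spdk_cpu_lock_*                         # the files check_remaining_locks compares against

  # target 2: cores 2-4 (0x1c) overlap on core 2, so enabling locks is expected to fail
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # -> Code=-32603, Failed to claim CPU core: 2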
00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.512 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.079 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.079 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:31.079 ************************************ 00:05:31.079 END TEST locking_overlapped_coremask_via_rpc 00:05:31.079 ************************************ 00:05:31.079 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:31.079 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.079 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.079 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.079 00:05:31.079 real 0m2.689s 00:05:31.079 user 0m1.407s 00:05:31.079 sys 0m0.230s 00:05:31.079 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.079 16:20:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:31.079 16:20:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:31.079 16:20:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63430 ]] 00:05:31.079 16:20:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63430 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63430 ']' 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63430 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63430 00:05:31.079 killing process with pid 63430 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63430' 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63430 00:05:31.079 16:20:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63430 00:05:31.337 16:20:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63460 ]] 00:05:31.338 16:20:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63460 00:05:31.338 16:20:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63460 ']' 00:05:31.338 16:20:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63460 00:05:31.338 16:20:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:31.338 16:20:49 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.338 16:20:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63460 00:05:31.338 killing process with pid 63460 00:05:31.338 16:20:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:31.338 16:20:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:31.338 16:20:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63460' 00:05:31.338 16:20:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63460 00:05:31.338 16:20:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63460 00:05:31.905 16:20:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.905 16:20:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:31.905 16:20:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63430 ]] 00:05:31.905 16:20:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63430 00:05:31.905 16:20:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63430 ']' 00:05:31.905 16:20:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63430 00:05:31.905 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63430) - No such process 00:05:31.905 Process with pid 63430 is not found 00:05:31.905 16:20:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63430 is not found' 00:05:31.905 16:20:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63460 ]] 00:05:31.905 16:20:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63460 00:05:31.905 16:20:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63460 ']' 00:05:31.905 16:20:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63460 00:05:31.905 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63460) - No such process 00:05:31.905 Process with pid 63460 is not found 00:05:31.905 16:20:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63460 is not found' 00:05:31.905 16:20:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.905 00:05:31.905 real 0m21.003s 00:05:31.905 user 0m36.741s 00:05:31.905 sys 0m5.682s 00:05:31.905 16:20:49 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.905 16:20:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.905 ************************************ 00:05:31.905 END TEST cpu_locks 00:05:31.905 ************************************ 00:05:31.905 16:20:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:31.905 00:05:31.905 real 0m49.476s 00:05:31.905 user 1m34.988s 00:05:31.905 sys 0m9.701s 00:05:31.905 16:20:49 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.905 16:20:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.905 ************************************ 00:05:31.905 END TEST event 00:05:31.905 ************************************ 00:05:31.905 16:20:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.905 16:20:49 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:31.905 16:20:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.905 16:20:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.905 16:20:49 -- common/autotest_common.sh@10 -- # set +x 00:05:31.905 ************************************ 00:05:31.905 START TEST thread 
00:05:31.905 ************************************ 00:05:31.905 16:20:49 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:31.905 * Looking for test storage... 00:05:31.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:31.905 16:20:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.905 16:20:50 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:31.905 16:20:50 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.905 16:20:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.905 ************************************ 00:05:31.905 START TEST thread_poller_perf 00:05:31.905 ************************************ 00:05:31.905 16:20:50 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.905 [2024-07-21 16:20:50.082651] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:31.905 [2024-07-21 16:20:50.082744] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63612 ] 00:05:32.164 [2024-07-21 16:20:50.226509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.164 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:32.164 [2024-07-21 16:20:50.344514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.540 ====================================== 00:05:33.540 busy:2206325724 (cyc) 00:05:33.540 total_run_count: 342000 00:05:33.540 tsc_hz: 2200000000 (cyc) 00:05:33.540 ====================================== 00:05:33.540 poller_cost: 6451 (cyc), 2932 (nsec) 00:05:33.540 00:05:33.540 real 0m1.357s 00:05:33.540 user 0m1.188s 00:05:33.540 sys 0m0.063s 00:05:33.540 16:20:51 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.540 16:20:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.540 ************************************ 00:05:33.540 END TEST thread_poller_perf 00:05:33.540 ************************************ 00:05:33.540 16:20:51 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:33.540 16:20:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.540 16:20:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:33.540 16:20:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.540 16:20:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.540 ************************************ 00:05:33.540 START TEST thread_poller_perf 00:05:33.540 ************************************ 00:05:33.540 16:20:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.540 [2024-07-21 16:20:51.492852] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:05:33.540 [2024-07-21 16:20:51.492957] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63642 ] 00:05:33.540 [2024-07-21 16:20:51.630788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.540 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:33.540 [2024-07-21 16:20:51.723536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.918 ====================================== 00:05:34.918 busy:2202052657 (cyc) 00:05:34.918 total_run_count: 4329000 00:05:34.918 tsc_hz: 2200000000 (cyc) 00:05:34.918 ====================================== 00:05:34.918 poller_cost: 508 (cyc), 230 (nsec) 00:05:34.918 00:05:34.918 real 0m1.329s 00:05:34.918 user 0m1.171s 00:05:34.918 sys 0m0.051s 00:05:34.918 16:20:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.918 ************************************ 00:05:34.918 END TEST thread_poller_perf 00:05:34.918 16:20:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.918 ************************************ 00:05:34.918 16:20:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:34.918 16:20:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:34.918 00:05:34.918 real 0m2.877s 00:05:34.918 user 0m2.421s 00:05:34.918 sys 0m0.236s 00:05:34.918 16:20:52 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.918 ************************************ 00:05:34.918 END TEST thread 00:05:34.918 ************************************ 00:05:34.918 16:20:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.918 16:20:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.918 16:20:52 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:34.918 16:20:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.918 16:20:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.918 16:20:52 -- common/autotest_common.sh@10 -- # set +x 00:05:34.918 ************************************ 00:05:34.918 START TEST accel 00:05:34.918 ************************************ 00:05:34.918 16:20:52 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:34.918 * Looking for test storage... 00:05:34.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:34.918 16:20:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:34.918 16:20:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:34.918 16:20:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.918 16:20:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63717 00:05:34.918 16:20:52 accel -- accel/accel.sh@63 -- # waitforlisten 63717 00:05:34.918 16:20:52 accel -- common/autotest_common.sh@829 -- # '[' -z 63717 ']' 00:05:34.918 16:20:52 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.918 16:20:52 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.918 16:20:52 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
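The two poller_perf summaries above are consistent with poller_cost simply being the busy cycle count divided by total_run_count, converted to nanoseconds with the reported 2200000000 (2.2 GHz) tsc_hz:

  run with -l 1 (1 us poller period): 2206325724 cyc / 342000 runs  ~= 6451 cyc ~= 6451 / 2.2 ~= 2932 nsec
  run with -l 0 (0 us poller period): 2202052657 cyc / 4329000 runs ~=  508 cyc ~=  508 / 2.2 ~=  230 nsec

so dropping the 1 us period lets over 12x as many poller runs complete in the same 1 second window.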
00:05:34.918 16:20:52 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:34.918 16:20:52 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.918 16:20:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:34.918 16:20:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.918 16:20:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.918 16:20:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.918 16:20:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.918 16:20:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.918 16:20:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.918 16:20:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:34.918 16:20:52 accel -- accel/accel.sh@41 -- # jq -r . 00:05:34.918 [2024-07-21 16:20:53.037521] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:34.918 [2024-07-21 16:20:53.037615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63717 ] 00:05:35.175 [2024-07-21 16:20:53.168041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.175 [2024-07-21 16:20:53.284556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.106 16:20:54 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.106 16:20:54 accel -- common/autotest_common.sh@862 -- # return 0 00:05:36.106 16:20:54 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:36.106 16:20:54 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:36.106 16:20:54 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:36.106 16:20:54 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:36.106 16:20:54 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:36.106 16:20:54 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:36.106 16:20:54 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.106 16:20:54 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:36.106 16:20:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.106 16:20:54 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 
16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.106 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.106 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.106 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.107 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.107 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.107 16:20:54 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.107 16:20:54 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.107 16:20:54 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.107 16:20:54 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.107 16:20:54 accel -- accel/accel.sh@75 -- # killprocess 63717 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@948 -- # '[' -z 63717 ']' 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@952 -- # kill -0 63717 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@953 -- # uname 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63717 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.107 killing process with pid 63717 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63717' 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@967 -- # kill 63717 00:05:36.107 16:20:54 accel -- common/autotest_common.sh@972 -- # wait 63717 00:05:36.673 16:20:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:36.673 16:20:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:36.673 16:20:54 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:36.673 16:20:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.673 16:20:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.673 16:20:54 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:36.673 16:20:54 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:36.673 16:20:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:36.673 16:20:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.673 16:20:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.673 16:20:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.673 16:20:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.673 16:20:54 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.673 16:20:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:36.673 16:20:54 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
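The opcode-assignment step a few lines up is a reusable shell idiom: rpc_cmd accel_get_opc_assignments returns a JSON object mapping opcode to module, jq's to_entries/map turns it into key=value lines, and IFS== read splits each pair. A standalone sketch of the same idiom (the sample JSON below is made up for illustration; the real data comes from the RPC):

  echo '{"copy":"software","crc32c":"software"}' |
    jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' |
    while IFS== read -r opc module; do
      echo "opcode $opc is handled by $module"
    done
  # opcode copy is handled by software
  # opcode crc32c is handled by software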
00:05:36.673 16:20:54 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.673 16:20:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:36.673 16:20:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.673 16:20:54 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:36.673 16:20:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:36.673 16:20:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.673 16:20:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.673 ************************************ 00:05:36.673 START TEST accel_missing_filename 00:05:36.673 ************************************ 00:05:36.673 16:20:54 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:36.673 16:20:54 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:36.673 16:20:54 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:36.673 16:20:54 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:36.673 16:20:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.673 16:20:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:36.673 16:20:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.673 16:20:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:36.673 16:20:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:36.673 16:20:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:36.673 16:20:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.673 16:20:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.673 16:20:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.674 16:20:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.674 16:20:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.674 16:20:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:36.674 16:20:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:36.674 [2024-07-21 16:20:54.694624] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:36.674 [2024-07-21 16:20:54.694745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63786 ] 00:05:36.674 [2024-07-21 16:20:54.832327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.932 [2024-07-21 16:20:54.950270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.932 [2024-07-21 16:20:55.011747] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:36.932 [2024-07-21 16:20:55.089180] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:37.191 A filename is required. 
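The "A filename is required." error above is the expected outcome of this negative test: per the accel_perf option list printed further down, compress/decompress workloads take their uncompressed input file via -l, and this run deliberately omits it. The contrast, using the same binary and the input file the next test supplies:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress                                                  # rejected, no input file
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib   # the -l form accel_compress_verify uses below (which then fails only because it also adds -y)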
00:05:37.191 16:20:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:37.191 16:20:55 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.191 16:20:55 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:37.191 16:20:55 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:37.191 16:20:55 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:37.191 16:20:55 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.191 00:05:37.191 real 0m0.517s 00:05:37.191 user 0m0.336s 00:05:37.191 sys 0m0.122s 00:05:37.191 16:20:55 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.191 ************************************ 00:05:37.191 16:20:55 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:37.191 END TEST accel_missing_filename 00:05:37.191 ************************************ 00:05:37.191 16:20:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.191 16:20:55 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:37.191 16:20:55 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:37.191 16:20:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.191 16:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.191 ************************************ 00:05:37.191 START TEST accel_compress_verify 00:05:37.191 ************************************ 00:05:37.191 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:37.191 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:37.191 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:37.191 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:37.191 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.191 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:37.191 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.191 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:37.191 16:20:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:37.192 16:20:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:37.192 16:20:55 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.192 16:20:55 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.192 16:20:55 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.192 16:20:55 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.192 16:20:55 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.192 16:20:55 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:37.192 16:20:55 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:37.192 [2024-07-21 16:20:55.265689] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:37.192 [2024-07-21 16:20:55.265788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63811 ] 00:05:37.460 [2024-07-21 16:20:55.407196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.460 [2024-07-21 16:20:55.529120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.460 [2024-07-21 16:20:55.587679] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:37.736 [2024-07-21 16:20:55.665849] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:37.736 00:05:37.736 Compression does not support the verify option, aborting. 00:05:37.736 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:37.736 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.736 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:37.736 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:37.736 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:37.736 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.736 00:05:37.736 real 0m0.518s 00:05:37.736 user 0m0.354s 00:05:37.736 sys 0m0.111s 00:05:37.736 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.736 16:20:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:37.736 ************************************ 00:05:37.736 END TEST accel_compress_verify 00:05:37.736 ************************************ 00:05:37.736 16:20:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.736 16:20:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:37.736 16:20:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:37.736 16:20:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.736 16:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.736 ************************************ 00:05:37.736 START TEST accel_wrong_workload 00:05:37.736 ************************************ 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:37.736 16:20:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:37.736 16:20:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:37.736 16:20:55 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.736 16:20:55 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.736 16:20:55 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.736 16:20:55 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.736 16:20:55 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.736 16:20:55 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:37.736 16:20:55 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:37.736 Unsupported workload type: foobar 00:05:37.736 [2024-07-21 16:20:55.830531] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:37.736 accel_perf options: 00:05:37.736 [-h help message] 00:05:37.736 [-q queue depth per core] 00:05:37.736 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:37.736 [-T number of threads per core 00:05:37.736 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:37.736 [-t time in seconds] 00:05:37.736 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:37.736 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:37.736 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:37.736 [-l for compress/decompress workloads, name of uncompressed input file 00:05:37.736 [-S for crc32c workload, use this seed value (default 0) 00:05:37.736 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:37.736 [-f for fill workload, use this BYTE value (default 255) 00:05:37.736 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:37.736 [-y verify result if this switch is on] 00:05:37.736 [-a tasks to allocate per core (default: same value as -q)] 00:05:37.736 Can be used to spread operations across a wider range of memory. 
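The usage text above is what accel_perf prints whenever argument parsing fails (here for the bogus workload "foobar"). For contrast, a well-formed invocation built from the same documented flags is the one the accel_crc32c test below runs:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # 1 second run, crc32c workload, seed 32, verify results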
00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.736 00:05:37.736 real 0m0.031s 00:05:37.736 user 0m0.014s 00:05:37.736 sys 0m0.017s 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.736 16:20:55 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:37.736 ************************************ 00:05:37.736 END TEST accel_wrong_workload 00:05:37.736 ************************************ 00:05:37.736 16:20:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.736 16:20:55 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:37.736 16:20:55 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:37.736 16:20:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.736 16:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.736 ************************************ 00:05:37.736 START TEST accel_negative_buffers 00:05:37.736 ************************************ 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:37.736 16:20:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:37.736 16:20:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:37.736 16:20:55 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.736 16:20:55 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.736 16:20:55 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.736 16:20:55 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.736 16:20:55 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.736 16:20:55 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:37.736 16:20:55 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:37.736 -x option must be non-negative. 
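The "-x option must be non-negative." rejection above comes from passing -x -1: the xor workload's source-buffer count must be non-negative and, per the option list, at least 2. A presumably valid variant (not exercised in this log, shown only as an assumption based on that help text) would be:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3    # xor across 3 source buffers, verify the result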
00:05:37.736 [2024-07-21 16:20:55.913854] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:37.736 accel_perf options: 00:05:37.736 [-h help message] 00:05:37.736 [-q queue depth per core] 00:05:37.736 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:37.736 [-T number of threads per core 00:05:37.736 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:37.736 [-t time in seconds] 00:05:37.736 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:37.736 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:37.736 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:37.736 [-l for compress/decompress workloads, name of uncompressed input file 00:05:37.736 [-S for crc32c workload, use this seed value (default 0) 00:05:37.736 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:37.736 [-f for fill workload, use this BYTE value (default 255) 00:05:37.736 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:37.736 [-y verify result if this switch is on] 00:05:37.736 [-a tasks to allocate per core (default: same value as -q)] 00:05:37.736 Can be used to spread operations across a wider range of memory. 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.736 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.736 00:05:37.736 real 0m0.032s 00:05:37.736 user 0m0.017s 00:05:37.736 sys 0m0.015s 00:05:37.737 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.737 ************************************ 00:05:37.737 END TEST accel_negative_buffers 00:05:37.737 ************************************ 00:05:37.737 16:20:55 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:37.995 16:20:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.995 16:20:55 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:37.995 16:20:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:37.995 16:20:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.995 16:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.995 ************************************ 00:05:37.995 START TEST accel_crc32c 00:05:37.995 ************************************ 00:05:37.995 16:20:55 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:37.995 16:20:55 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:37.995 [2024-07-21 16:20:55.991900] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:37.995 [2024-07-21 16:20:55.992007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63876 ] 00:05:37.995 [2024-07-21 16:20:56.130603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.254 [2024-07-21 16:20:56.228030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.254 16:20:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.632 16:20:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.633 16:20:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:39.633 16:20:57 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.633 00:05:39.633 real 0m1.483s 00:05:39.633 user 0m1.273s 00:05:39.633 sys 0m0.112s 00:05:39.633 16:20:57 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.633 16:20:57 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:39.633 ************************************ 00:05:39.633 END TEST accel_crc32c 00:05:39.633 ************************************ 00:05:39.633 16:20:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.633 16:20:57 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:39.633 16:20:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:39.633 16:20:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.633 16:20:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.633 ************************************ 00:05:39.633 START TEST accel_crc32c_C2 00:05:39.633 ************************************ 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:39.633 16:20:57 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:39.633 [2024-07-21 16:20:57.522221] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:39.633 [2024-07-21 16:20:57.522335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63910 ] 00:05:39.633 [2024-07-21 16:20:57.656777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.633 [2024-07-21 16:20:57.749775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.633 16:20:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.019 16:20:58 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.019 00:05:41.019 real 0m1.482s 00:05:41.019 user 0m1.273s 00:05:41.019 sys 0m0.111s 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.019 ************************************ 00:05:41.019 END TEST accel_crc32c_C2 00:05:41.019 ************************************ 00:05:41.019 16:20:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:41.019 16:20:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.019 16:20:59 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:41.019 16:20:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:41.019 16:20:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.019 16:20:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.019 ************************************ 00:05:41.019 START TEST accel_copy 00:05:41.019 ************************************ 00:05:41.020 16:20:59 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:41.020 16:20:59 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:41.020 16:20:59 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:41.020 [2024-07-21 16:20:59.056503] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:41.020 [2024-07-21 16:20:59.056616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63945 ] 00:05:41.020 [2024-07-21 16:20:59.200886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.277 [2024-07-21 16:20:59.326045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.277 16:20:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:42.651 16:21:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.651 00:05:42.651 real 0m1.525s 00:05:42.651 user 0m1.308s 00:05:42.651 sys 0m0.122s 00:05:42.651 16:21:00 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.651 16:21:00 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:42.651 ************************************ 00:05:42.651 END TEST accel_copy 00:05:42.651 ************************************ 00:05:42.651 16:21:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.651 16:21:00 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.651 16:21:00 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:42.651 16:21:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.651 16:21:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.651 ************************************ 00:05:42.651 START TEST accel_fill 00:05:42.651 ************************************ 00:05:42.651 16:21:00 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.651 16:21:00 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:42.651 16:21:00 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:42.651 [2024-07-21 16:21:00.625128] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:42.651 [2024-07-21 16:21:00.625203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63979 ] 00:05:42.651 [2024-07-21 16:21:00.757999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.651 [2024-07-21 16:21:00.855105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.909 16:21:00 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.909 16:21:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:44.293 16:21:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.293 00:05:44.293 real 0m1.472s 00:05:44.293 user 0m1.268s 00:05:44.293 sys 0m0.112s 00:05:44.293 16:21:02 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.293 16:21:02 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:44.293 ************************************ 00:05:44.293 END TEST accel_fill 00:05:44.293 ************************************ 00:05:44.293 16:21:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.293 16:21:02 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:44.293 16:21:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.293 16:21:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.293 16:21:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.293 ************************************ 00:05:44.293 START TEST accel_copy_crc32c 00:05:44.293 ************************************ 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:44.293 [2024-07-21 16:21:02.152826] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:44.293 [2024-07-21 16:21:02.152907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64014 ] 00:05:44.293 [2024-07-21 16:21:02.293683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.293 [2024-07-21 16:21:02.405755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.293 16:21:02 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.294 16:21:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.665 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.665 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:45.665 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.665 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.665 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.665 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.665 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.665 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.666 00:05:45.666 real 0m1.504s 00:05:45.666 user 0m1.279s 00:05:45.666 sys 0m0.130s 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.666 ************************************ 00:05:45.666 END TEST accel_copy_crc32c 00:05:45.666 ************************************ 00:05:45.666 16:21:03 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:45.666 16:21:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.666 16:21:03 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:45.666 16:21:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:45.666 16:21:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.666 16:21:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.666 ************************************ 00:05:45.666 START TEST accel_copy_crc32c_C2 00:05:45.666 ************************************ 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:45.666 16:21:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:45.666 [2024-07-21 16:21:03.711975] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:45.666 [2024-07-21 16:21:03.712067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64048 ] 00:05:45.666 [2024-07-21 16:21:03.844488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.923 [2024-07-21 16:21:03.954799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.923 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.924 16:21:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.298 ************************************ 00:05:47.298 END TEST accel_copy_crc32c_C2 00:05:47.298 ************************************ 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.298 00:05:47.298 real 0m1.501s 00:05:47.298 
user 0m1.301s 00:05:47.298 sys 0m0.107s 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.298 16:21:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:47.298 16:21:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.298 16:21:05 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:47.298 16:21:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:47.298 16:21:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.298 16:21:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.298 ************************************ 00:05:47.298 START TEST accel_dualcast 00:05:47.298 ************************************ 00:05:47.298 16:21:05 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:47.298 16:21:05 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:47.298 16:21:05 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:47.298 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.298 16:21:05 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:47.298 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.298 16:21:05 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:47.299 16:21:05 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:47.299 16:21:05 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.299 16:21:05 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.299 16:21:05 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.299 16:21:05 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.299 16:21:05 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.299 16:21:05 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:47.299 16:21:05 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:47.299 [2024-07-21 16:21:05.261155] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:05:47.299 [2024-07-21 16:21:05.261246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64083 ] 00:05:47.299 [2024-07-21 16:21:05.401919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.557 [2024-07-21 16:21:05.526043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.557 16:21:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.558 16:21:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.558 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.558 16:21:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:48.935 16:21:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.935 00:05:48.935 real 0m1.535s 00:05:48.935 user 0m1.316s 00:05:48.935 sys 0m0.121s 00:05:48.935 16:21:06 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.935 ************************************ 00:05:48.935 END TEST accel_dualcast 00:05:48.935 16:21:06 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:48.935 ************************************ 00:05:48.935 16:21:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.935 16:21:06 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:48.935 16:21:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:48.935 16:21:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.935 16:21:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.935 ************************************ 00:05:48.935 START TEST accel_compare 00:05:48.935 ************************************ 00:05:48.935 16:21:06 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:48.935 16:21:06 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:48.935 [2024-07-21 16:21:06.847294] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:05:48.935 [2024-07-21 16:21:06.847413] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64117 ] 00:05:48.935 [2024-07-21 16:21:06.986835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.935 [2024-07-21 16:21:07.116988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:49.194 16:21:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.576 ************************************ 00:05:50.576 END TEST accel_compare 00:05:50.576 ************************************ 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:50.576 16:21:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.576 00:05:50.576 real 0m1.545s 00:05:50.576 user 0m1.322s 00:05:50.576 sys 0m0.124s 00:05:50.576 16:21:08 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.576 16:21:08 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:50.576 16:21:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.576 16:21:08 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:50.576 16:21:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:50.576 16:21:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.576 16:21:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.576 ************************************ 00:05:50.576 START TEST accel_xor 00:05:50.576 ************************************ 00:05:50.576 16:21:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:50.576 [2024-07-21 16:21:08.444548] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:05:50.576 [2024-07-21 16:21:08.444630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64152 ] 00:05:50.576 [2024-07-21 16:21:08.576429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.576 [2024-07-21 16:21:08.710671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.576 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.836 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.837 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.837 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.837 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.837 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.837 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.837 16:21:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.837 16:21:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.837 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.837 16:21:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.773 16:21:09 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:51.773 16:21:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.773 00:05:51.773 real 0m1.520s 00:05:51.773 user 0m1.300s 00:05:51.773 sys 0m0.123s 00:05:51.773 ************************************ 00:05:51.773 END TEST accel_xor 00:05:51.773 ************************************ 00:05:51.773 16:21:09 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.773 16:21:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:52.032 16:21:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.032 16:21:09 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:52.032 16:21:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:52.032 16:21:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.032 16:21:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.032 ************************************ 00:05:52.032 START TEST accel_xor 00:05:52.032 ************************************ 00:05:52.032 16:21:09 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.032 16:21:09 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:52.033 16:21:09 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:52.033 [2024-07-21 16:21:10.017292] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:05:52.033 [2024-07-21 16:21:10.017402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64186 ] 00:05:52.033 [2024-07-21 16:21:10.158640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.292 [2024-07-21 16:21:10.299831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:52.292 16:21:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.665 ************************************ 00:05:53.665 END TEST accel_xor 00:05:53.665 ************************************ 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.665 16:21:11 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:53.665 16:21:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.665 00:05:53.665 real 0m1.537s 00:05:53.665 user 0m1.309s 00:05:53.665 sys 0m0.132s 00:05:53.665 16:21:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.665 16:21:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:53.665 16:21:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.665 16:21:11 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:53.665 16:21:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:53.665 16:21:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.665 16:21:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.665 ************************************ 00:05:53.665 START TEST accel_dif_verify 00:05:53.665 ************************************ 00:05:53.665 16:21:11 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:53.665 16:21:11 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:53.665 [2024-07-21 16:21:11.611081] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:05:53.665 [2024-07-21 16:21:11.611165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64223 ] 00:05:53.665 [2024-07-21 16:21:11.745451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.665 [2024-07-21 16:21:11.850951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.923 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.924 16:21:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:55.297 16:21:13 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.297 00:05:55.297 real 0m1.512s 00:05:55.297 user 0m1.295s 00:05:55.297 sys 0m0.122s 00:05:55.297 16:21:13 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.297 ************************************ 00:05:55.297 END TEST accel_dif_verify 00:05:55.297 ************************************ 00:05:55.297 16:21:13 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:55.297 16:21:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.297 16:21:13 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:55.297 16:21:13 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:55.297 16:21:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.297 16:21:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.297 ************************************ 00:05:55.297 START TEST accel_dif_generate 00:05:55.297 ************************************ 00:05:55.297 16:21:13 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:55.297 [2024-07-21 16:21:13.172546] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:55.297 [2024-07-21 16:21:13.172660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64262 ] 00:05:55.297 [2024-07-21 16:21:13.315357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.297 [2024-07-21 16:21:13.436890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.297 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.579 16:21:13 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:55.579 16:21:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:56.513 16:21:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.513 00:05:56.513 real 0m1.523s 
00:05:56.513 user 0m0.017s 00:05:56.513 sys 0m0.002s 00:05:56.513 16:21:14 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.513 ************************************ 00:05:56.513 END TEST accel_dif_generate 00:05:56.513 ************************************ 00:05:56.513 16:21:14 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:56.513 16:21:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.513 16:21:14 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:56.513 16:21:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:56.513 16:21:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.513 16:21:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.771 ************************************ 00:05:56.771 START TEST accel_dif_generate_copy 00:05:56.771 ************************************ 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:56.771 16:21:14 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:56.771 [2024-07-21 16:21:14.750036] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:05:56.771 [2024-07-21 16:21:14.750146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64292 ] 00:05:56.771 [2024-07-21 16:21:14.886535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.030 [2024-07-21 16:21:15.008983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.030 16:21:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.405 ************************************ 00:05:58.405 END TEST accel_dif_generate_copy 00:05:58.405 ************************************ 00:05:58.405 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.406 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:58.406 16:21:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.406 00:05:58.406 real 0m1.511s 00:05:58.406 user 0m0.011s 00:05:58.406 sys 0m0.002s 00:05:58.406 16:21:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.406 16:21:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:58.406 16:21:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.406 16:21:16 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:58.406 16:21:16 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:58.406 16:21:16 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:58.406 16:21:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.406 16:21:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.406 ************************************ 00:05:58.406 START TEST accel_comp 00:05:58.406 ************************************ 00:05:58.406 16:21:16 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:58.406 16:21:16 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:58.406 [2024-07-21 16:21:16.303953] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:58.406 [2024-07-21 16:21:16.304079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64332 ] 00:05:58.406 [2024-07-21 16:21:16.442926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.406 [2024-07-21 16:21:16.547219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:58.406 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:58.665 16:21:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:59.617 16:21:17 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.617 00:05:59.617 real 0m1.494s 00:05:59.617 user 0m1.285s 00:05:59.617 sys 0m0.118s 00:05:59.617 16:21:17 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.617 ************************************ 00:05:59.617 END TEST accel_comp 00:05:59.617 ************************************ 00:05:59.617 16:21:17 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:59.617 16:21:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.617 16:21:17 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:59.617 16:21:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:59.617 16:21:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.617 16:21:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.875 ************************************ 00:05:59.875 START TEST accel_decomp 00:05:59.875 ************************************ 00:05:59.875 16:21:17 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:59.875 16:21:17 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:59.875 [2024-07-21 16:21:17.853673] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:05:59.875 [2024-07-21 16:21:17.853787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64361 ] 00:05:59.876 [2024-07-21 16:21:17.993063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.134 [2024-07-21 16:21:18.103729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.134 16:21:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:01.536 16:21:19 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.536 00:06:01.536 real 0m1.500s 00:06:01.536 user 0m0.014s 00:06:01.536 sys 0m0.002s 00:06:01.536 16:21:19 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.536 ************************************ 00:06:01.536 END TEST accel_decomp 00:06:01.536 ************************************ 00:06:01.536 16:21:19 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:01.536 16:21:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.536 16:21:19 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:01.536 16:21:19 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:01.536 16:21:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.536 16:21:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.536 ************************************ 00:06:01.536 START TEST accel_decomp_full 00:06:01.536 ************************************ 00:06:01.536 16:21:19 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.536 16:21:19 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:01.537 [2024-07-21 16:21:19.403237] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:06:01.537 [2024-07-21 16:21:19.403899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64397 ] 00:06:01.537 [2024-07-21 16:21:19.544887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.537 [2024-07-21 16:21:19.666462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.537 16:21:19 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.537 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.824 16:21:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.758 16:21:20 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.758 16:21:20 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.758 00:06:02.758 real 0m1.527s 00:06:02.758 user 0m1.316s 00:06:02.758 sys 0m0.117s 00:06:02.758 16:21:20 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.758 ************************************ 00:06:02.758 END TEST accel_decomp_full 00:06:02.758 ************************************ 00:06:02.758 16:21:20 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:02.758 16:21:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.758 16:21:20 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:02.758 16:21:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:02.758 16:21:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.758 16:21:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.758 ************************************ 00:06:02.758 START TEST accel_decomp_mcore 00:06:02.758 ************************************ 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:02.758 16:21:20 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:03.044 [2024-07-21 16:21:20.983433] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:03.044 [2024-07-21 16:21:20.983563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64431 ] 00:06:03.044 [2024-07-21 16:21:21.123494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.302 [2024-07-21 16:21:21.243462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.302 [2024-07-21 16:21:21.243599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.302 [2024-07-21 16:21:21.243762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.302 [2024-07-21 16:21:21.243763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.302 16:21:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.676 00:06:04.676 real 0m1.539s 00:06:04.676 user 0m4.730s 00:06:04.676 sys 0m0.143s 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.676 16:21:22 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:04.676 ************************************ 00:06:04.676 END TEST accel_decomp_mcore 00:06:04.676 ************************************ 00:06:04.676 16:21:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.676 16:21:22 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.676 16:21:22 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:04.676 16:21:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.676 16:21:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.676 ************************************ 00:06:04.676 START TEST accel_decomp_full_mcore 00:06:04.676 ************************************ 00:06:04.676 16:21:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.676 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:04.676 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:04.676 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.676 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.676 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.677 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:04.677 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:04.677 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.677 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.677 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.677 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.677 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.677 16:21:22 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:04.677 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:04.677 [2024-07-21 16:21:22.572038] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:04.677 [2024-07-21 16:21:22.572126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64475 ] 00:06:04.677 [2024-07-21 16:21:22.710660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.677 [2024-07-21 16:21:22.835552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.677 [2024-07-21 16:21:22.835662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.677 [2024-07-21 16:21:22.835748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.677 [2024-07-21 16:21:22.835748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.934 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.935 16:21:22 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:04.935 16:21:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.307 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.308 16:21:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.308 00:06:06.308 real 0m1.562s 00:06:06.308 user 0m4.811s 00:06:06.308 sys 0m0.132s 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.308 16:21:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:06.308 ************************************ 00:06:06.308 END TEST accel_decomp_full_mcore 00:06:06.308 ************************************ 00:06:06.308 16:21:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.308 16:21:24 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:06.308 16:21:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:06.308 16:21:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.308 16:21:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.308 ************************************ 00:06:06.308 START TEST accel_decomp_mthread 00:06:06.308 ************************************ 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:06.308 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:06.308 [2024-07-21 16:21:24.191791] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
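The accel_perf invocations traced in this part of the log all drive the software decompress path against the same input file, test/accel/bib; only the core mask and threading switches change between tests, and the -m 0xf, -T 2 and -o 0 flags presumably correspond to the mcore, mthread and full variants named in the test banners. As a minimal sketch, assuming a built tree at /home/vagrant/spdk_repo/spdk and leaving out the -c /dev/fd/62 accel config the harness pipes in, the same tool can be run by hand:

    # four-core run (cores 0-3), 1 second, verifying the decompressed output (-y)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf

    # single-core run with two worker threads (-T 2) and the full-buffer (-o 0) variant
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2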
00:06:06.308 [2024-07-21 16:21:24.191947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64508 ] 00:06:06.308 [2024-07-21 16:21:24.328583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.308 [2024-07-21 16:21:24.456191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.566 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.567 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:06.567 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.567 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.567 16:21:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.499 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.758 00:06:07.758 real 0m1.542s 00:06:07.758 user 0m1.316s 00:06:07.758 sys 0m0.129s 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.758 ************************************ 00:06:07.758 END TEST accel_decomp_mthread 00:06:07.758 ************************************ 00:06:07.758 16:21:25 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:07.758 16:21:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.758 16:21:25 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:07.758 16:21:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:07.758 16:21:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.758 16:21:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.758 ************************************ 00:06:07.758 START 
TEST accel_decomp_full_mthread 00:06:07.758 ************************************ 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:07.758 16:21:25 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:07.758 [2024-07-21 16:21:25.782664] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:06:07.758 [2024-07-21 16:21:25.782755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64547 ] 00:06:07.758 [2024-07-21 16:21:25.922252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.016 [2024-07-21 16:21:26.039975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.016 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:08.017 16:21:26 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.017 16:21:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.392 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.393 00:06:09.393 real 0m1.531s 00:06:09.393 user 0m1.308s 00:06:09.393 sys 0m0.126s 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.393 16:21:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:09.393 ************************************ 00:06:09.393 END TEST accel_decomp_full_mthread 00:06:09.393 ************************************ 
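Every test above and below is framed the same way: the run_test helper from autotest_common.sh prints the asterisk banners, times the wrapped command (the real/user/sys figures in the trace) and propagates its exit code. The following is only a rough sketch of that pattern under those assumptions, not the harness's actual implementation:

    run_test() {                        # simplified stand-in for the autotest helper
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                       # run and time the wrapped test command
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }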
00:06:09.393 16:21:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.393 16:21:27 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:09.393 16:21:27 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:09.393 16:21:27 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:09.393 16:21:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.393 16:21:27 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.393 16:21:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.393 16:21:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.393 16:21:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.393 16:21:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.393 16:21:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.393 16:21:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.393 16:21:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:09.393 16:21:27 accel -- accel/accel.sh@41 -- # jq -r . 00:06:09.393 ************************************ 00:06:09.393 START TEST accel_dif_functional_tests 00:06:09.393 ************************************ 00:06:09.393 16:21:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:09.393 [2024-07-21 16:21:27.394131] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:09.393 [2024-07-21 16:21:27.394425] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64578 ] 00:06:09.393 [2024-07-21 16:21:27.530667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.652 [2024-07-21 16:21:27.618895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.652 [2024-07-21 16:21:27.619027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.652 [2024-07-21 16:21:27.619030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.652 00:06:09.652 00:06:09.652 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.652 http://cunit.sourceforge.net/ 00:06:09.652 00:06:09.652 00:06:09.652 Suite: accel_dif 00:06:09.652 Test: verify: DIF generated, GUARD check ...passed 00:06:09.652 Test: verify: DIF generated, APPTAG check ...passed 00:06:09.652 Test: verify: DIF generated, REFTAG check ...passed 00:06:09.652 Test: verify: DIF not generated, GUARD check ...[2024-07-21 16:21:27.711106] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:09.652 passed 00:06:09.652 Test: verify: DIF not generated, APPTAG check ...[2024-07-21 16:21:27.711523] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:09.652 passed 00:06:09.652 Test: verify: DIF not generated, REFTAG check ...[2024-07-21 16:21:27.711881] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:09.652 passed 00:06:09.652 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:09.652 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-21 16:21:27.712210] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:09.652 passed 00:06:09.652 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:06:09.652 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:09.652 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:09.652 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:06:09.652 Test: verify copy: DIF generated, GUARD check ...passed 00:06:09.652 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:09.652 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:09.652 Test: verify copy: DIF not generated, GUARD check ...[2024-07-21 16:21:27.712750] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:09.652 [2024-07-21 16:21:27.713152] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 passed 00:06:09.652 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-21 16:21:27.713550] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a passed 00:06:09.652 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-21 16:21:27.713808] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a passed 00:06:09.652 Test: generate copy: DIF generated, GUARD check ...passed 00:06:09.652 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:09.652 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:09.652 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:09.652 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:09.652 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:09.652 Test: generate copy: iovecs-len validate ...passed 00:06:09.652 Test: generate copy: buffer alignment validate ...passed 00:06:09.652 00:06:09.652 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.652 suites 1 1 n/a 0 0 00:06:09.652 tests 26 26 26 0 0 00:06:09.652 asserts 115 115 115 0 n/a 00:06:09.652 00:06:09.652 Elapsed time = 0.008 seconds 00:06:09.652 [2024-07-21 16:21:27.714339] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:09.910 ************************************ 00:06:09.910 END TEST accel_dif_functional_tests 00:06:09.910 ************************************ 00:06:09.910 00:06:09.910 real 0m0.596s 00:06:09.910 user 0m0.789s 00:06:09.910 sys 0m0.155s 00:06:09.910 16:21:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.910 16:21:27 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:09.910 16:21:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.910 ************************************ 00:06:09.910 END TEST accel 00:06:09.910 ************************************ 00:06:09.910 00:06:09.910 real 0m35.080s 00:06:09.910 user 0m36.791s 00:06:09.910 sys 0m4.085s 00:06:09.910 16:21:27 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.910 16:21:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.910 16:21:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:09.910 16:21:28 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:09.910 16:21:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.910 16:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.910 16:21:28 -- common/autotest_common.sh@10 -- # set +x 00:06:09.910 ************************************ 00:06:09.910 START TEST accel_rpc 00:06:09.910 ************************************ 00:06:09.910 16:21:28 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:09.910 * Looking for test storage... 00:06:09.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:09.910 16:21:28 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:09.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.910 16:21:28 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64648 00:06:09.910 16:21:28 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:09.910 16:21:28 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64648 00:06:09.910 16:21:28 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64648 ']' 00:06:09.910 16:21:28 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.910 16:21:28 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.910 16:21:28 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.910 16:21:28 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.910 16:21:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.168 [2024-07-21 16:21:28.181080] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:06:10.168 [2024-07-21 16:21:28.181184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64648 ] 00:06:10.168 [2024-07-21 16:21:28.322912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.426 [2024-07-21 16:21:28.450082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.992 16:21:29 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.992 16:21:29 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:10.992 16:21:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:10.992 16:21:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:10.992 16:21:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:10.992 16:21:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:10.992 16:21:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:10.992 16:21:29 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.992 16:21:29 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.992 16:21:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.992 ************************************ 00:06:10.992 START TEST accel_assign_opcode 00:06:10.992 ************************************ 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.992 [2024-07-21 16:21:29.170802] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:10.992 [2024-07-21 16:21:29.178798] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.992 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:11.249 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.249 16:21:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:11.249 16:21:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:11.249 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.249 16:21:29 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:06:11.249 16:21:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:11.249 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.508 software 00:06:11.508 00:06:11.508 real 0m0.296s 00:06:11.508 user 0m0.059s 00:06:11.508 sys 0m0.009s 00:06:11.508 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.508 ************************************ 00:06:11.508 END TEST accel_assign_opcode 00:06:11.508 ************************************ 00:06:11.508 16:21:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:11.508 16:21:29 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64648 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64648 ']' 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64648 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64648 00:06:11.508 killing process with pid 64648 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64648' 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@967 -- # kill 64648 00:06:11.508 16:21:29 accel_rpc -- common/autotest_common.sh@972 -- # wait 64648 00:06:11.766 00:06:11.766 real 0m1.895s 00:06:11.766 user 0m1.984s 00:06:11.766 sys 0m0.468s 00:06:11.766 16:21:29 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.766 ************************************ 00:06:11.766 END TEST accel_rpc 00:06:11.766 ************************************ 00:06:11.766 16:21:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.766 16:21:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.766 16:21:29 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:11.766 16:21:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.766 16:21:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.766 16:21:29 -- common/autotest_common.sh@10 -- # set +x 00:06:11.766 ************************************ 00:06:11.766 START TEST app_cmdline 00:06:11.766 ************************************ 00:06:11.766 16:21:29 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:12.025 * Looking for test storage... 00:06:12.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:12.025 16:21:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.025 16:21:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64759 00:06:12.025 16:21:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64759 00:06:12.025 16:21:30 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
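The accel_rpc suite above exercises opcode assignment over JSON-RPC: spdk_tgt is started with --wait-for-rpc, the copy opcode is first pointed at a non-existent module and then at the software module, framework initialization is completed, and the assignment is read back and checked for "software". A hedged sketch of the same sequence issued directly through scripts/rpc.py, with paths matching the spdk_repo layout used throughout this log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    # wait until /var/tmp/spdk.sock is accepting connections before issuing RPCs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected to print: software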
00:06:12.025 16:21:30 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64759 ']' 00:06:12.025 16:21:30 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.025 16:21:30 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.025 16:21:30 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.025 16:21:30 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.025 16:21:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.025 [2024-07-21 16:21:30.119732] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:12.025 [2024-07-21 16:21:30.120112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64759 ] 00:06:12.283 [2024-07-21 16:21:30.258036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.283 [2024-07-21 16:21:30.386113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.216 16:21:31 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.216 16:21:31 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:13.216 16:21:31 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:13.216 { 00:06:13.216 "fields": { 00:06:13.216 "commit": "89fd17309", 00:06:13.216 "major": 24, 00:06:13.216 "minor": 9, 00:06:13.216 "patch": 0, 00:06:13.216 "suffix": "-pre" 00:06:13.216 }, 00:06:13.216 "version": "SPDK v24.09-pre git sha1 89fd17309" 00:06:13.216 } 00:06:13.216 16:21:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:13.216 16:21:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:13.216 16:21:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:13.216 16:21:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:13.216 16:21:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:13.216 16:21:31 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.216 16:21:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:13.216 16:21:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:13.216 16:21:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.473 16:21:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:13.473 16:21:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:13.473 16:21:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.473 16:21:31 
app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:13.473 16:21:31 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.729 2024/07/21 16:21:31 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:13.729 request: 00:06:13.729 { 00:06:13.729 "method": "env_dpdk_get_mem_stats", 00:06:13.729 "params": {} 00:06:13.729 } 00:06:13.729 Got JSON-RPC error response 00:06:13.729 GoRPCClient: error on JSON-RPC call 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.729 16:21:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64759 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64759 ']' 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64759 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64759 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.729 killing process with pid 64759 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64759' 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@967 -- # kill 64759 00:06:13.729 16:21:31 app_cmdline -- common/autotest_common.sh@972 -- # wait 64759 00:06:14.365 00:06:14.365 real 0m2.254s 00:06:14.365 user 0m2.871s 00:06:14.365 sys 0m0.508s 00:06:14.365 16:21:32 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.365 16:21:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.365 ************************************ 00:06:14.365 END TEST app_cmdline 00:06:14.365 ************************************ 00:06:14.365 16:21:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.365 16:21:32 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:14.365 16:21:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.365 16:21:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.365 16:21:32 -- common/autotest_common.sh@10 -- # set +x 00:06:14.365 ************************************ 00:06:14.365 START TEST version 00:06:14.365 ************************************ 00:06:14.365 16:21:32 
version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:14.365 * Looking for test storage... 00:06:14.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:14.365 16:21:32 version -- app/version.sh@17 -- # get_header_version major 00:06:14.365 16:21:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:14.365 16:21:32 version -- app/version.sh@14 -- # cut -f2 00:06:14.365 16:21:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.365 16:21:32 version -- app/version.sh@17 -- # major=24 00:06:14.365 16:21:32 version -- app/version.sh@18 -- # get_header_version minor 00:06:14.365 16:21:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:14.366 16:21:32 version -- app/version.sh@14 -- # cut -f2 00:06:14.366 16:21:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.366 16:21:32 version -- app/version.sh@18 -- # minor=9 00:06:14.366 16:21:32 version -- app/version.sh@19 -- # get_header_version patch 00:06:14.366 16:21:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:14.366 16:21:32 version -- app/version.sh@14 -- # cut -f2 00:06:14.366 16:21:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.366 16:21:32 version -- app/version.sh@19 -- # patch=0 00:06:14.366 16:21:32 version -- app/version.sh@20 -- # get_header_version suffix 00:06:14.366 16:21:32 version -- app/version.sh@14 -- # cut -f2 00:06:14.366 16:21:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:14.366 16:21:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.366 16:21:32 version -- app/version.sh@20 -- # suffix=-pre 00:06:14.366 16:21:32 version -- app/version.sh@22 -- # version=24.9 00:06:14.366 16:21:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:14.366 16:21:32 version -- app/version.sh@28 -- # version=24.9rc0 00:06:14.366 16:21:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:14.366 16:21:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:14.366 16:21:32 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:14.366 16:21:32 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:14.366 00:06:14.366 real 0m0.151s 00:06:14.366 user 0m0.077s 00:06:14.366 sys 0m0.103s 00:06:14.366 16:21:32 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.366 16:21:32 version -- common/autotest_common.sh@10 -- # set +x 00:06:14.366 ************************************ 00:06:14.366 END TEST version 00:06:14.366 ************************************ 00:06:14.366 16:21:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.366 16:21:32 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:14.366 16:21:32 -- spdk/autotest.sh@198 -- # uname -s 00:06:14.366 16:21:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:14.366 16:21:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:14.366 16:21:32 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:14.366 16:21:32 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 
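The version test above pulls MAJOR, MINOR, PATCH and SUFFIX straight out of include/spdk/version.h and checks them against the Python package. A condensed sketch of that extraction, with the grep/cut/tr pipeline taken from the trace and the helper name purely illustrative:

  # version.h fields are tab-separated, so cut's default delimiter works
  get_field() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_field MAJOR)    # 24 in this run
  minor=$(get_field MINOR)    # 9
  patch=$(get_field PATCH)    # 0
  suffix=$(get_field SUFFIX)  # -pre
  # patch == 0 with a -pre suffix collapses to 24.9rc0, which must match
  # python3 -c 'import spdk; print(spdk.__version__)'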
00:06:14.366 16:21:32 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:14.366 16:21:32 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:14.366 16:21:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.366 16:21:32 -- common/autotest_common.sh@10 -- # set +x 00:06:14.366 16:21:32 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:14.366 16:21:32 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:14.366 16:21:32 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:14.366 16:21:32 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:14.366 16:21:32 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:14.366 16:21:32 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:14.366 16:21:32 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.366 16:21:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:14.366 16:21:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.366 16:21:32 -- common/autotest_common.sh@10 -- # set +x 00:06:14.366 ************************************ 00:06:14.366 START TEST nvmf_tcp 00:06:14.366 ************************************ 00:06:14.366 16:21:32 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.653 * Looking for test storage... 00:06:14.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:14.653 16:21:32 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.653 16:21:32 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.653 16:21:32 nvmf_tcp -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.653 16:21:32 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.653 16:21:32 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.653 16:21:32 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.653 16:21:32 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:14.653 16:21:32 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:14.653 16:21:32 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.653 16:21:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:14.653 16:21:32 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:14.653 16:21:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:14.653 16:21:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.653 16:21:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.653 ************************************ 00:06:14.653 START TEST 
nvmf_example 00:06:14.653 ************************************ 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:14.653 * Looking for test storage... 00:06:14.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:14.653 Cannot find device "nvmf_init_br" 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:14.653 Cannot find device "nvmf_tgt_br" 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:14.653 Cannot find device "nvmf_tgt_br2" 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:14.653 16:21:32 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:14.653 Cannot find device "nvmf_init_br" 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:14.654 Cannot find device "nvmf_tgt_br" 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:14.654 Cannot find device "nvmf_tgt_br2" 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:14.654 Cannot find device "nvmf_br" 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:14.654 Cannot find device "nvmf_init_if" 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:14.654 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:14.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:14.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
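nvmftestinit is building a disposable veth topology for the TCP transport here: the initiator side stays in the default namespace at 10.0.0.1, while both target interfaces are moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3 (the "Cannot find device" / "Cannot open network namespace" messages above are just tolerated cleanup of a topology that did not exist yet). Condensed, the setup so far amounts to the sketch below; the nvmf_br bridge enslaving and the iptables ACCEPT rules follow in the trace.

  ip netns add nvmf_tgt_ns_spdk
  # Three veth pairs: initiator, target, second target
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # Target ends live inside the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up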
00:06:14.910 16:21:32 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:14.910 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:14.910 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:14.911 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:14.911 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:14.911 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:14.911 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:14.911 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:14.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:06:14.911 00:06:14.911 --- 10.0.0.2 ping statistics --- 00:06:14.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.911 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:06:14.911 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:14.911 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:14.911 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:06:14.911 00:06:14.911 --- 10.0.0.3 ping statistics --- 00:06:14.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.911 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:06:14.911 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:15.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:15.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:06:15.167 00:06:15.167 --- 10.0.0.1 ping statistics --- 00:06:15.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.167 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=65110 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 65110 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 65110 ']' 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
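At this point the nvmf example target has been launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF) and the script is waiting for its RPC socket. Once it is up, the trace below provisions it over /var/tmp/spdk.sock and then drives I/O from the initiator side; a condensed sketch of those calls (rpc_cmd in the scripts wraps the same rpc.py, the $rpc shorthand here is illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512        # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator-side load generator, exactly as invoked later in the trace
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'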
00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.167 16:21:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.101 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:16.359 16:21:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:26.376 Initializing NVMe Controllers 00:06:26.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:26.376 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:26.376 Initialization complete. Launching workers. 00:06:26.376 ======================================================== 00:06:26.376 Latency(us) 00:06:26.376 Device Information : IOPS MiB/s Average min max 00:06:26.376 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14865.45 58.07 4306.41 728.73 23117.00 00:06:26.376 ======================================================== 00:06:26.376 Total : 14865.45 58.07 4306.41 728.73 23117.00 00:06:26.376 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:26.634 rmmod nvme_tcp 00:06:26.634 rmmod nvme_fabrics 00:06:26.634 rmmod nvme_keyring 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 65110 ']' 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 65110 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 65110 ']' 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 65110 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65110 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:26.634 killing process with pid 65110 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65110' 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 65110 00:06:26.634 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 65110 00:06:26.893 nvmf threads initialize successfully 00:06:26.893 bdev subsystem init successfully 00:06:26.893 created a nvmf target service 00:06:26.893 create targets's poll groups done 00:06:26.893 all subsystems of target started 00:06:26.893 nvmf target is running 00:06:26.893 all subsystems of target stopped 00:06:26.893 destroy targets's poll groups done 00:06:26.893 destroyed the nvmf target service 00:06:26.893 bdev subsystem finish successfully 00:06:26.893 nvmf threads destroy successfully 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:26.893 16:21:44 
nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:26.893 16:21:44 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:26.893 00:06:26.893 real 0m12.362s 00:06:26.893 user 0m44.377s 00:06:26.893 sys 0m1.980s 00:06:26.893 16:21:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.893 ************************************ 00:06:26.893 END TEST nvmf_example 00:06:26.893 16:21:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:26.893 ************************************ 00:06:26.893 16:21:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:26.893 16:21:45 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:26.893 16:21:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:26.893 16:21:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.893 16:21:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.893 ************************************ 00:06:26.893 START TEST nvmf_filesystem 00:06:26.893 ************************************ 00:06:26.893 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:27.240 * Looking for test storage... 
00:06:27.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:27.240 #define SPDK_CONFIG_H 00:06:27.240 #define SPDK_CONFIG_APPS 1 00:06:27.240 #define SPDK_CONFIG_ARCH native 00:06:27.240 #undef SPDK_CONFIG_ASAN 00:06:27.240 #define SPDK_CONFIG_AVAHI 1 00:06:27.240 #undef SPDK_CONFIG_CET 00:06:27.240 #define SPDK_CONFIG_COVERAGE 1 00:06:27.240 #define SPDK_CONFIG_CROSS_PREFIX 00:06:27.240 #undef SPDK_CONFIG_CRYPTO 00:06:27.240 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:27.240 #undef SPDK_CONFIG_CUSTOMOCF 00:06:27.240 #undef SPDK_CONFIG_DAOS 00:06:27.240 #define SPDK_CONFIG_DAOS_DIR 00:06:27.240 #define SPDK_CONFIG_DEBUG 1 00:06:27.240 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:27.240 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:27.240 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:27.240 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:27.240 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:27.240 #undef SPDK_CONFIG_DPDK_UADK 00:06:27.240 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:27.240 #define SPDK_CONFIG_EXAMPLES 1 00:06:27.240 #undef SPDK_CONFIG_FC 00:06:27.240 #define SPDK_CONFIG_FC_PATH 00:06:27.240 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:27.240 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:27.240 #undef SPDK_CONFIG_FUSE 00:06:27.240 #undef SPDK_CONFIG_FUZZER 00:06:27.240 #define SPDK_CONFIG_FUZZER_LIB 00:06:27.240 #define SPDK_CONFIG_GOLANG 1 00:06:27.240 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:27.240 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:27.240 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:27.240 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:27.240 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:27.240 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:27.240 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:27.240 #define SPDK_CONFIG_IDXD 1 00:06:27.240 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:27.240 #undef SPDK_CONFIG_IPSEC_MB 00:06:27.240 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:27.240 #define SPDK_CONFIG_ISAL 1 00:06:27.240 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:27.240 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:27.240 #define SPDK_CONFIG_LIBDIR 00:06:27.240 #undef SPDK_CONFIG_LTO 00:06:27.240 #define SPDK_CONFIG_MAX_LCORES 128 00:06:27.240 #define SPDK_CONFIG_NVME_CUSE 1 00:06:27.240 #undef SPDK_CONFIG_OCF 00:06:27.240 #define SPDK_CONFIG_OCF_PATH 00:06:27.240 #define SPDK_CONFIG_OPENSSL_PATH 00:06:27.240 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:27.240 #define SPDK_CONFIG_PGO_DIR 00:06:27.240 #undef SPDK_CONFIG_PGO_USE 00:06:27.240 #define SPDK_CONFIG_PREFIX /usr/local 00:06:27.240 #undef SPDK_CONFIG_RAID5F 00:06:27.240 #undef SPDK_CONFIG_RBD 00:06:27.240 #define SPDK_CONFIG_RDMA 1 00:06:27.240 #define SPDK_CONFIG_RDMA_PROV verbs 
00:06:27.240 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:27.240 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:27.240 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:27.240 #define SPDK_CONFIG_SHARED 1 00:06:27.240 #undef SPDK_CONFIG_SMA 00:06:27.240 #define SPDK_CONFIG_TESTS 1 00:06:27.240 #undef SPDK_CONFIG_TSAN 00:06:27.240 #define SPDK_CONFIG_UBLK 1 00:06:27.240 #define SPDK_CONFIG_UBSAN 1 00:06:27.240 #undef SPDK_CONFIG_UNIT_TESTS 00:06:27.240 #undef SPDK_CONFIG_URING 00:06:27.240 #define SPDK_CONFIG_URING_PATH 00:06:27.240 #undef SPDK_CONFIG_URING_ZNS 00:06:27.240 #define SPDK_CONFIG_USDT 1 00:06:27.240 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:27.240 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:27.240 #undef SPDK_CONFIG_VFIO_USER 00:06:27.240 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:27.240 #define SPDK_CONFIG_VHOST 1 00:06:27.240 #define SPDK_CONFIG_VIRTIO 1 00:06:27.240 #undef SPDK_CONFIG_VTUNE 00:06:27.240 #define SPDK_CONFIG_VTUNE_DIR 00:06:27.240 #define SPDK_CONFIG_WERROR 1 00:06:27.240 #define SPDK_CONFIG_WPDK_DIR 00:06:27.240 #undef SPDK_CONFIG_XNVME 00:06:27.240 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:27.240 16:21:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
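For reference, the applications.sh step traced above decides whether debug-only apps are usable by globbing the generated include/spdk/config.h for the SPDK_CONFIG_DEBUG define. A minimal sketch of that check (config.h path taken from the trace; the surrounding script is illustrative, not a copy of applications.sh):

  #!/usr/bin/env bash
  # Sketch: detect a debug build the same way as the trace above -- by
  # pattern-matching the generated config header instead of re-running configure.
  config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
  if [[ -e "$config_h" && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build: debug-only test apps are available"
  else
      echo "release build: skip debug-only test apps"
  fi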
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:27.241 16:21:45 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- 
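The long run of "-- # : 0" / "export SPDK_TEST_..." pairs above is consistent with the usual bash default-then-export idiom; treating that as the underlying code is an inference from the trace, not a copy of autotest_common.sh. A sketch with a few of the flags shown above:

  # Sketch (assumed idiom): give each test flag a default only if unset, then export it.
  : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
  : "${SPDK_RUN_UBSAN:=0}";             export SPDK_RUN_UBSAN
  echo "NVMF=$SPDK_TEST_NVMF transport=$SPDK_TEST_NVMF_TRANSPORT ubsan=$SPDK_RUN_UBSAN"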
common/autotest_common.sh@164 -- # : 0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
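The sanitizer setup traced above builds a leak-suppression file and exports the ASan/UBSan/LSan options used for the rest of the run. A minimal sketch of the same effect (option strings copied from the trace; the exact file handling in autotest_common.sh may differ):

  # Sketch: suppress a known FUSE leak and configure sanitizer behavior for the run.
  supp=/var/tmp/asan_suppression_file
  rm -f "$supp"
  echo "leak:libfuse3.so" >> "$supp"   # known libfuse3 leak, suppressed instead of failing the run
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  export LSAN_OPTIONS=suppressions=$supp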
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:27.241 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65359 ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65359 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.jltojz 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.jltojz/tests/target /tmp/spdk.jltojz 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264520704 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267895808 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494357504 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507161600 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:06:27.242 
16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13769715712 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5260398592 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13769715712 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5260398592 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267760640 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267895808 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=96358940672 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3343839232 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:27.242 * Looking for test storage... 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13769715712 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- 
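The set_test_storage trace above boils down to indexing free space by mount point with df and picking the first candidate directory whose filesystem has enough room. A self-contained sketch of that probe (candidate directories are illustrative; the requested size matches the trace):

  # Sketch: find a directory whose filesystem has at least the requested free space.
  requested_size=2214592512            # ~2.06 GiB, as requested in the trace
  declare -A avails
  # Index available bytes by mount point, mirroring the df/read loop above.
  while read -r fs total used avail pct mount; do
      avails["$mount"]=$((avail * 1024))   # df -P reports 1K blocks
  done < <(df -P | tail -n +2)
  for dir in "$HOME" /tmp; do              # candidate list is illustrative
      mount=$(df -P "$dir" | awk 'NR==2 {print $6}')
      if (( ${avails[$mount]:-0} >= requested_size )); then
          echo "using $dir on $mount"
          break
      fi
  done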
common/autotest_common.sh@1682 -- # set -o errtrace 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.242 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:27.243 Cannot find device "nvmf_tgt_br" 00:06:27.243 16:21:45 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:27.243 Cannot find device "nvmf_tgt_br2" 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:27.243 Cannot find device "nvmf_tgt_br" 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:27.243 Cannot find device "nvmf_tgt_br2" 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:27.243 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:27.243 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:27.243 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:27.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:06:27.500 00:06:27.500 --- 10.0.0.2 ping statistics --- 00:06:27.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.500 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:27.500 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:27.500 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:06:27.500 00:06:27.500 --- 10.0.0.3 ping statistics --- 00:06:27.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.500 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:27.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:27.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:06:27.500 00:06:27.500 --- 10.0.0.1 ping statistics --- 00:06:27.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.500 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.500 ************************************ 00:06:27.500 START TEST nvmf_filesystem_no_in_capsule 00:06:27.500 ************************************ 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65516 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65516 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65516 ']' 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
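The nvmf_veth_init trace above builds the virtual test topology: a network namespace for the target, veth pairs joined by a bridge, NVMe/TCP allowed through iptables, and a ping check. A condensed sketch of the same topology (run as root; interface, namespace, and address names come from the trace, but this version wires up only one target interface instead of two):

  # Sketch: one namespace, one initiator veth pair, one target veth pair, one bridge.
  set -e
  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
  ip link set nvmf_tgt_if netns "$NS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  # Allow NVMe/TCP (port 4420) in, permit bridge-local forwarding, then verify reachability.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2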
00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.500 16:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.500 [2024-07-21 16:21:45.700007] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:27.500 [2024-07-21 16:21:45.700135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.757 [2024-07-21 16:21:45.836327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.757 [2024-07-21 16:21:45.952839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.757 [2024-07-21 16:21:45.953078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.757 [2024-07-21 16:21:45.953147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.757 [2024-07-21 16:21:45.953253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.757 [2024-07-21 16:21:45.953370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:27.757 [2024-07-21 16:21:45.953575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.757 [2024-07-21 16:21:45.953694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.757 [2024-07-21 16:21:45.954192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.757 [2024-07-21 16:21:45.954207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.690 [2024-07-21 16:21:46.765043] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.690 
16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.690 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 Malloc1 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 [2024-07-21 16:21:46.964499] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:28.948 { 00:06:28.948 "aliases": [ 00:06:28.948 "03fcd761-8c12-4cdc-ac1c-87e584f5c3dd" 00:06:28.948 ], 00:06:28.948 "assigned_rate_limits": { 00:06:28.948 "r_mbytes_per_sec": 0, 00:06:28.948 "rw_ios_per_sec": 0, 00:06:28.948 "rw_mbytes_per_sec": 0, 00:06:28.948 "w_mbytes_per_sec": 0 00:06:28.948 }, 00:06:28.948 "block_size": 512, 00:06:28.948 "claim_type": "exclusive_write", 00:06:28.948 "claimed": true, 00:06:28.948 "driver_specific": {}, 00:06:28.948 "memory_domains": [ 00:06:28.948 { 00:06:28.948 "dma_device_id": "system", 00:06:28.948 "dma_device_type": 1 00:06:28.948 }, 00:06:28.948 { 00:06:28.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.948 "dma_device_type": 2 00:06:28.948 } 00:06:28.948 ], 00:06:28.948 "name": "Malloc1", 00:06:28.948 "num_blocks": 1048576, 00:06:28.948 "product_name": "Malloc disk", 00:06:28.948 "supported_io_types": { 00:06:28.948 "abort": true, 00:06:28.948 "compare": false, 00:06:28.948 "compare_and_write": false, 00:06:28.948 "copy": true, 00:06:28.948 "flush": true, 00:06:28.948 "get_zone_info": false, 00:06:28.948 "nvme_admin": false, 00:06:28.948 "nvme_io": false, 00:06:28.948 "nvme_io_md": false, 00:06:28.948 "nvme_iov_md": false, 00:06:28.948 "read": true, 00:06:28.948 "reset": true, 00:06:28.948 "seek_data": false, 00:06:28.948 "seek_hole": false, 00:06:28.948 "unmap": true, 00:06:28.948 "write": true, 00:06:28.948 "write_zeroes": true, 00:06:28.948 "zcopy": true, 00:06:28.948 "zone_append": false, 00:06:28.948 "zone_management": false 00:06:28.948 }, 00:06:28.948 "uuid": "03fcd761-8c12-4cdc-ac1c-87e584f5c3dd", 00:06:28.948 "zoned": false 00:06:28.948 } 00:06:28.948 ]' 00:06:28.948 16:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:28.948 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:28.948 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:28.948 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:28.948 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:28.948 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:28.948 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:28.948 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:29.206 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:29.206 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:29.206 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:06:29.206 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:29.206 16:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:31.106 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:31.364 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:31.364 16:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.298 ************************************ 
00:06:32.298 START TEST filesystem_ext4 00:06:32.298 ************************************ 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:32.298 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:32.298 mke2fs 1.46.5 (30-Dec-2021) 00:06:32.556 Discarding device blocks: 0/522240 done 00:06:32.556 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:32.556 Filesystem UUID: 54e87044-7c2f-443f-8080-57aa056d81cb 00:06:32.556 Superblock backups stored on blocks: 00:06:32.556 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:32.556 00:06:32.556 Allocating group tables: 0/64 done 00:06:32.556 Writing inode tables: 0/64 done 00:06:32.556 Creating journal (8192 blocks): done 00:06:32.556 Writing superblocks and filesystem accounting information: 0/64 done 00:06:32.556 00:06:32.556 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:32.556 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:32.556 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:32.556 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:32.814 16:21:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65516 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:32.814 00:06:32.814 real 0m0.394s 00:06:32.814 user 0m0.019s 00:06:32.814 sys 0m0.059s 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:32.814 ************************************ 00:06:32.814 END TEST filesystem_ext4 00:06:32.814 ************************************ 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.814 ************************************ 00:06:32.814 START TEST filesystem_btrfs 00:06:32.814 ************************************ 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:32.814 16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:32.814 
16:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:32.814 btrfs-progs v6.6.2 00:06:32.814 See https://btrfs.readthedocs.io for more information. 00:06:32.814 00:06:32.814 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:32.814 NOTE: several default settings have changed in version 5.15, please make sure 00:06:32.814 this does not affect your deployments: 00:06:32.814 - DUP for metadata (-m dup) 00:06:32.814 - enabled no-holes (-O no-holes) 00:06:32.814 - enabled free-space-tree (-R free-space-tree) 00:06:32.814 00:06:32.814 Label: (null) 00:06:32.814 UUID: 77831fef-a5a4-4a71-ad80-d722c0e11a30 00:06:32.814 Node size: 16384 00:06:32.814 Sector size: 4096 00:06:32.815 Filesystem size: 510.00MiB 00:06:32.815 Block group profiles: 00:06:32.815 Data: single 8.00MiB 00:06:32.815 Metadata: DUP 32.00MiB 00:06:32.815 System: DUP 8.00MiB 00:06:32.815 SSD detected: yes 00:06:32.815 Zoned device: no 00:06:32.815 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:32.815 Runtime features: free-space-tree 00:06:32.815 Checksum: crc32c 00:06:32.815 Number of devices: 1 00:06:32.815 Devices: 00:06:32.815 ID SIZE PATH 00:06:32.815 1 510.00MiB /dev/nvme0n1p1 00:06:32.815 00:06:32.815 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:32.815 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65516 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:33.074 00:06:33.074 real 0m0.223s 00:06:33.074 user 0m0.020s 00:06:33.074 sys 0m0.064s 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.074 ************************************ 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.074 END TEST filesystem_btrfs 00:06:33.074 ************************************ 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.074 ************************************ 00:06:33.074 START TEST filesystem_xfs 00:06:33.074 ************************************ 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:33.074 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:33.074 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:33.074 = sectsz=512 attr=2, projid32bit=1 00:06:33.074 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:33.074 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:33.074 data = bsize=4096 blocks=130560, imaxpct=25 00:06:33.074 = sunit=0 swidth=0 blks 00:06:33.074 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:33.074 log =internal log bsize=4096 blocks=16384, version=2 00:06:33.074 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:33.074 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:34.007 Discarding blocks...Done. 
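The ext4 and btrfs subtests above, and the xfs subtest continuing below, all exercise the same sequence from target/filesystem.sh: format the first partition of the exported namespace, mount it, perform a small write, unmount, then confirm the target is still alive and the device and partition are still visible. Stripped of the xtrace prefixes, one iteration amounts to roughly the following (the pid 65516 is specific to this run; ext4 uses the -F force flag, btrfs and xfs use -f):

    dev=/dev/nvme0n1p1
    mkfs.xfs -f "$dev"              # mkfs.ext4 -F / mkfs.btrfs -f in the other iterations
    mount "$dev" /mnt/device
    touch /mnt/device/aaa           # small write through the NVMe/TCP-backed filesystem
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 65516                   # the nvmf_tgt process must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1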
00:06:34.007 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:34.007 16:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65516 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:36.594 00:06:36.594 real 0m3.179s 00:06:36.594 user 0m0.023s 00:06:36.594 sys 0m0.059s 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:36.594 ************************************ 00:06:36.594 END TEST filesystem_xfs 00:06:36.594 ************************************ 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:36.594 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:36.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:36.595 16:21:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65516 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65516 ']' 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65516 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65516 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65516' 00:06:36.595 killing process with pid 65516 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65516 00:06:36.595 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65516 00:06:36.853 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:36.853 00:06:36.853 real 0m9.270s 00:06:36.853 user 0m34.871s 00:06:36.853 sys 0m1.687s 00:06:36.853 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.853 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.853 ************************************ 00:06:36.853 END TEST nvmf_filesystem_no_in_capsule 00:06:36.853 ************************************ 00:06:36.853 16:21:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.854 ************************************ 00:06:36.854 START TEST nvmf_filesystem_in_capsule 00:06:36.854 ************************************ 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65827 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65827 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65827 ']' 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.854 16:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.854 [2024-07-21 16:21:55.038625] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:36.854 [2024-07-21 16:21:55.038719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.113 [2024-07-21 16:21:55.182242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.113 [2024-07-21 16:21:55.308845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.113 [2024-07-21 16:21:55.308906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.113 [2024-07-21 16:21:55.308920] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.113 [2024-07-21 16:21:55.308939] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
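The second pass starting here repeats the whole flow with one functional difference: in-capsule data is enabled. Where the first pass created the TCP transport with a 0-byte in-capsule data size, this one passes 4096 (see the nvmf_create_transport call below). Expressed as the JSON-RPC client invocations the rpc_cmd wrapper is assumed to issue, the two transport setups differ only in -c:

    # first pass (nvmf_filesystem_no_in_capsule): no in-capsule data
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # second pass (nvmf_filesystem_in_capsule): 4096-byte in-capsule data
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096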
00:06:37.113 [2024-07-21 16:21:55.308948] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.113 [2024-07-21 16:21:55.309114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.113 [2024-07-21 16:21:55.309388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.113 [2024-07-21 16:21:55.310244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.113 [2024-07-21 16:21:55.310278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.050 [2024-07-21 16:21:56.123114] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.050 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.309 Malloc1 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.309 16:21:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.309 [2024-07-21 16:21:56.326255] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:38.309 { 00:06:38.309 "aliases": [ 00:06:38.309 "baa5c108-5f44-42eb-a381-b4f846ca33fc" 00:06:38.309 ], 00:06:38.309 "assigned_rate_limits": { 00:06:38.309 "r_mbytes_per_sec": 0, 00:06:38.309 "rw_ios_per_sec": 0, 00:06:38.309 "rw_mbytes_per_sec": 0, 00:06:38.309 "w_mbytes_per_sec": 0 00:06:38.309 }, 00:06:38.309 "block_size": 512, 00:06:38.309 "claim_type": "exclusive_write", 00:06:38.309 "claimed": true, 00:06:38.309 "driver_specific": {}, 00:06:38.309 "memory_domains": [ 00:06:38.309 { 00:06:38.309 "dma_device_id": "system", 00:06:38.309 "dma_device_type": 1 00:06:38.309 }, 00:06:38.309 { 00:06:38.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.309 "dma_device_type": 2 00:06:38.309 } 00:06:38.309 ], 00:06:38.309 "name": "Malloc1", 00:06:38.309 "num_blocks": 1048576, 00:06:38.309 "product_name": "Malloc disk", 00:06:38.309 "supported_io_types": { 00:06:38.309 "abort": true, 00:06:38.309 "compare": false, 00:06:38.309 "compare_and_write": false, 00:06:38.309 "copy": true, 00:06:38.309 "flush": true, 00:06:38.309 "get_zone_info": false, 00:06:38.309 "nvme_admin": false, 00:06:38.309 "nvme_io": false, 00:06:38.309 "nvme_io_md": false, 00:06:38.309 "nvme_iov_md": false, 00:06:38.309 "read": true, 00:06:38.309 "reset": true, 00:06:38.309 "seek_data": false, 00:06:38.309 "seek_hole": false, 00:06:38.309 "unmap": true, 
00:06:38.309 "write": true, 00:06:38.309 "write_zeroes": true, 00:06:38.309 "zcopy": true, 00:06:38.309 "zone_append": false, 00:06:38.309 "zone_management": false 00:06:38.309 }, 00:06:38.309 "uuid": "baa5c108-5f44-42eb-a381-b4f846ca33fc", 00:06:38.309 "zoned": false 00:06:38.309 } 00:06:38.309 ]' 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:38.309 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:38.568 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:38.568 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:38.568 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:38.568 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:38.568 16:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:40.468 16:21:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:40.468 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:40.469 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:40.727 16:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.663 ************************************ 00:06:41.663 START TEST filesystem_in_capsule_ext4 00:06:41.663 ************************************ 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:41.663 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:41.663 16:21:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:41.663 mke2fs 1.46.5 (30-Dec-2021) 00:06:41.663 Discarding device blocks: 0/522240 done 00:06:41.663 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:41.663 Filesystem UUID: 781ed603-de7f-407a-91e8-9fd1b8e6c798 00:06:41.663 Superblock backups stored on blocks: 00:06:41.663 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:41.663 00:06:41.663 Allocating group tables: 0/64 done 00:06:41.663 Writing inode tables: 0/64 done 00:06:41.921 Creating journal (8192 blocks): done 00:06:41.921 Writing superblocks and filesystem accounting information: 0/64 done 00:06:41.921 00:06:41.921 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:41.921 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.921 16:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.921 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65827 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.179 00:06:42.179 real 0m0.406s 00:06:42.179 user 0m0.024s 00:06:42.179 sys 0m0.056s 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:42.179 ************************************ 00:06:42.179 END TEST filesystem_in_capsule_ext4 00:06:42.179 ************************************ 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:42.179 16:22:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.179 ************************************ 00:06:42.179 START TEST filesystem_in_capsule_btrfs 00:06:42.179 ************************************ 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:42.179 btrfs-progs v6.6.2 00:06:42.179 See https://btrfs.readthedocs.io for more information. 00:06:42.179 00:06:42.179 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:42.179 NOTE: several default settings have changed in version 5.15, please make sure 00:06:42.179 this does not affect your deployments: 00:06:42.179 - DUP for metadata (-m dup) 00:06:42.179 - enabled no-holes (-O no-holes) 00:06:42.179 - enabled free-space-tree (-R free-space-tree) 00:06:42.179 00:06:42.179 Label: (null) 00:06:42.179 UUID: 4851a281-ef8d-43fe-bf77-36e9277a474f 00:06:42.179 Node size: 16384 00:06:42.179 Sector size: 4096 00:06:42.179 Filesystem size: 510.00MiB 00:06:42.179 Block group profiles: 00:06:42.179 Data: single 8.00MiB 00:06:42.179 Metadata: DUP 32.00MiB 00:06:42.179 System: DUP 8.00MiB 00:06:42.179 SSD detected: yes 00:06:42.179 Zoned device: no 00:06:42.179 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:42.179 Runtime features: free-space-tree 00:06:42.179 Checksum: crc32c 00:06:42.179 Number of devices: 1 00:06:42.179 Devices: 00:06:42.179 ID SIZE PATH 00:06:42.179 1 510.00MiB /dev/nvme0n1p1 00:06:42.179 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:42.179 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65827 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.438 ************************************ 00:06:42.438 END TEST filesystem_in_capsule_btrfs 00:06:42.438 ************************************ 00:06:42.438 00:06:42.438 real 0m0.227s 00:06:42.438 user 0m0.024s 00:06:42.438 sys 0m0.067s 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.438 ************************************ 00:06:42.438 START TEST filesystem_in_capsule_xfs 00:06:42.438 ************************************ 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:42.438 16:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:42.438 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:42.438 = sectsz=512 attr=2, projid32bit=1 00:06:42.438 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:42.438 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:42.438 data = bsize=4096 blocks=130560, imaxpct=25 00:06:42.438 = sunit=0 swidth=0 blks 00:06:42.438 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:42.438 log =internal log bsize=4096 blocks=16384, version=2 00:06:42.438 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:42.438 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:43.371 Discarding blocks...Done. 
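Editor's note: after each mkfs completes, the steps traced next (target/filesystem.sh@23-43) exercise the freshly formatted namespace over NVMe/TCP and then confirm the target process and its block devices are still present. A condensed sketch of that sequence, assuming nvmfpid holds the target process PID as it does in this run:

    # Exercise the filesystem on the exported namespace, then sanity-check state.
    mount /dev/nvme0n1p1 /mnt/device         # mount the partition on the NVMe-oF namespace
    touch /mnt/device/aaa                    # create a file
    sync
    rm /mnt/device/aaa                       # remove it again
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                       # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible as a block device
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # and so is its partition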
00:06:43.371 16:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:43.371 16:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65827 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:45.267 ************************************ 00:06:45.267 END TEST filesystem_in_capsule_xfs 00:06:45.267 ************************************ 00:06:45.267 00:06:45.267 real 0m2.606s 00:06:45.267 user 0m0.028s 00:06:45.267 sys 0m0.047s 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:45.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:45.267 16:22:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65827 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65827 ']' 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65827 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65827 00:06:45.267 killing process with pid 65827 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65827' 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65827 00:06:45.267 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65827 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:45.833 00:06:45.833 real 0m8.810s 00:06:45.833 user 0m33.058s 00:06:45.833 sys 0m1.673s 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.833 ************************************ 00:06:45.833 END TEST nvmf_filesystem_in_capsule 00:06:45.833 ************************************ 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:45.833 rmmod nvme_tcp 00:06:45.833 rmmod nvme_fabrics 00:06:45.833 rmmod nvme_keyring 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:45.833 ************************************ 00:06:45.833 END TEST nvmf_filesystem 00:06:45.833 ************************************ 00:06:45.833 00:06:45.833 real 0m18.890s 00:06:45.833 user 1m8.173s 00:06:45.833 sys 0m3.742s 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.833 16:22:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.833 16:22:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:45.833 16:22:03 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:45.833 16:22:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.833 16:22:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.833 16:22:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.833 ************************************ 00:06:45.833 START TEST nvmf_target_discovery 00:06:45.833 ************************************ 00:06:45.833 16:22:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:46.090 * Looking for test storage... 
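Editor's note: the nvmftestfini teardown traced just above repeats at the end of every target test: unload the kernel NVMe/TCP initiator modules, drop the test network namespace, and flush the initiator-side address. A rough sketch; the _remove_spdk_ns body below is a hypothetical stand-in for the real helper in nvmf/common.sh:

    # Approximate shape of the traced nvmftestfini / nvmf_tcp_fini cleanup.
    sync
    modprobe -v -r nvme-tcp            # also removes nvme_fabrics / nvme_keyring as dependents
    modprobe -v -r nvme-fabrics        # may already be gone; the script wraps this in set +e / set -e

    _remove_spdk_ns() {
        # Hypothetical stand-in: the real helper tears down the nvmf_tgt_ns_spdk namespace and its veth peers.
        ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    }
    _remove_spdk_ns

    ip -4 addr flush nvmf_init_if      # clear the initiator-side veth address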
00:06:46.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:46.090 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:46.091 Cannot find device "nvmf_tgt_br" 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:46.091 Cannot find device "nvmf_tgt_br2" 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:46.091 Cannot find device "nvmf_tgt_br" 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:46.091 Cannot find device "nvmf_tgt_br2" 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:46.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:46.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:46.091 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:46.349 16:22:04 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:46.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:06:46.349 00:06:46.349 --- 10.0.0.2 ping statistics --- 00:06:46.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.349 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:46.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:46.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:06:46.349 00:06:46.349 --- 10.0.0.3 ping statistics --- 00:06:46.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.349 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:46.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:06:46.349 00:06:46.349 --- 10.0.0.1 ping statistics --- 00:06:46.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.349 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66285 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66285 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66285 ']' 00:06:46.349 16:22:04 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.349 16:22:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:46.349 [2024-07-21 16:22:04.512306] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:46.349 [2024-07-21 16:22:04.512430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.607 [2024-07-21 16:22:04.652312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.607 [2024-07-21 16:22:04.780838] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.607 [2024-07-21 16:22:04.780906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.607 [2024-07-21 16:22:04.780921] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.607 [2024-07-21 16:22:04.780931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.607 [2024-07-21 16:22:04.780941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
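Editor's note: once the target is up, the discovery test's setup phase (target/discovery.sh@23-35 in the trace that follows) is a small RPC loop: create the TCP transport, then four null bdevs, each exported through its own subsystem and TCP listener, plus a referral on port 4430. Reconstructed from the traced RPCs, using the same rpc_cmd wrapper the script uses:

    # Setup phase of the discovery test, as traced below.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512                              # 100 MiB null bdev, 512 B blocks
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000${i}"                                         # allow any host, fixed serial
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"      # expose the bdev as a namespace
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420                                            # listen inside the target netns
    done

    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420      # discovery service itself
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430                # referral entry checked later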
00:06:46.607 [2024-07-21 16:22:04.781109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.607 [2024-07-21 16:22:04.781185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.607 [2024-07-21 16:22:04.781779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.607 [2024-07-21 16:22:04.781797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 [2024-07-21 16:22:05.603890] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 Null1 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:06:47.546 [2024-07-21 16:22:05.657831] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 Null2 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 Null3 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 Null4 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.546 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.804 
16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 4420 00:06:47.804 00:06:47.804 Discovery Log Number of Records 6, Generation counter 6 00:06:47.804 =====Discovery Log Entry 0====== 00:06:47.804 trtype: tcp 00:06:47.804 adrfam: ipv4 00:06:47.804 subtype: current discovery subsystem 00:06:47.804 treq: not required 00:06:47.804 portid: 0 00:06:47.804 trsvcid: 4420 00:06:47.804 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:47.804 traddr: 10.0.0.2 00:06:47.804 eflags: explicit discovery connections, duplicate discovery information 00:06:47.804 sectype: none 00:06:47.804 =====Discovery Log Entry 1====== 00:06:47.804 trtype: tcp 00:06:47.804 adrfam: ipv4 00:06:47.804 subtype: nvme subsystem 00:06:47.804 treq: not required 00:06:47.804 portid: 0 00:06:47.804 trsvcid: 4420 00:06:47.804 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:47.804 traddr: 10.0.0.2 00:06:47.804 eflags: none 00:06:47.804 sectype: none 00:06:47.804 =====Discovery Log Entry 2====== 00:06:47.804 trtype: tcp 00:06:47.804 adrfam: ipv4 00:06:47.804 subtype: nvme subsystem 00:06:47.804 treq: not required 00:06:47.804 portid: 0 00:06:47.804 trsvcid: 4420 00:06:47.804 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:47.804 traddr: 10.0.0.2 00:06:47.804 eflags: none 00:06:47.804 sectype: none 00:06:47.804 =====Discovery Log Entry 3====== 00:06:47.804 trtype: tcp 00:06:47.804 adrfam: ipv4 00:06:47.804 subtype: nvme subsystem 00:06:47.804 treq: not required 00:06:47.804 portid: 0 00:06:47.804 trsvcid: 4420 00:06:47.804 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:47.804 traddr: 10.0.0.2 00:06:47.804 eflags: none 00:06:47.804 sectype: none 00:06:47.804 =====Discovery Log Entry 4====== 00:06:47.804 trtype: tcp 00:06:47.804 adrfam: ipv4 00:06:47.804 subtype: nvme subsystem 00:06:47.804 treq: not required 00:06:47.804 portid: 0 00:06:47.804 trsvcid: 4420 00:06:47.804 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:47.804 traddr: 10.0.0.2 00:06:47.804 eflags: none 00:06:47.804 sectype: none 00:06:47.804 =====Discovery Log Entry 5====== 00:06:47.804 trtype: tcp 00:06:47.804 adrfam: ipv4 00:06:47.804 subtype: discovery subsystem referral 00:06:47.804 treq: not required 00:06:47.804 portid: 0 00:06:47.804 trsvcid: 4430 00:06:47.804 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:47.804 traddr: 10.0.0.2 00:06:47.804 eflags: none 00:06:47.804 sectype: none 00:06:47.804 Perform nvmf subsystem discovery via RPC 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:47.804 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 [ 00:06:47.805 { 00:06:47.805 "allow_any_host": true, 00:06:47.805 "hosts": [], 00:06:47.805 "listen_addresses": [ 00:06:47.805 { 00:06:47.805 "adrfam": "IPv4", 00:06:47.805 "traddr": "10.0.0.2", 00:06:47.805 "trsvcid": "4420", 00:06:47.805 "trtype": "TCP" 00:06:47.805 } 00:06:47.805 ], 00:06:47.805 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:47.805 "subtype": "Discovery" 00:06:47.805 }, 00:06:47.805 { 00:06:47.805 "allow_any_host": true, 00:06:47.805 "hosts": [], 00:06:47.805 "listen_addresses": [ 00:06:47.805 { 
00:06:47.805 "adrfam": "IPv4", 00:06:47.805 "traddr": "10.0.0.2", 00:06:47.805 "trsvcid": "4420", 00:06:47.805 "trtype": "TCP" 00:06:47.805 } 00:06:47.805 ], 00:06:47.805 "max_cntlid": 65519, 00:06:47.805 "max_namespaces": 32, 00:06:47.805 "min_cntlid": 1, 00:06:47.805 "model_number": "SPDK bdev Controller", 00:06:47.805 "namespaces": [ 00:06:47.805 { 00:06:47.805 "bdev_name": "Null1", 00:06:47.805 "name": "Null1", 00:06:47.805 "nguid": "0131B8E6998442A2BD23BE44F91F46F4", 00:06:47.805 "nsid": 1, 00:06:47.805 "uuid": "0131b8e6-9984-42a2-bd23-be44f91f46f4" 00:06:47.805 } 00:06:47.805 ], 00:06:47.805 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:47.805 "serial_number": "SPDK00000000000001", 00:06:47.805 "subtype": "NVMe" 00:06:47.805 }, 00:06:47.805 { 00:06:47.805 "allow_any_host": true, 00:06:47.805 "hosts": [], 00:06:47.805 "listen_addresses": [ 00:06:47.805 { 00:06:47.805 "adrfam": "IPv4", 00:06:47.805 "traddr": "10.0.0.2", 00:06:47.805 "trsvcid": "4420", 00:06:47.805 "trtype": "TCP" 00:06:47.805 } 00:06:47.805 ], 00:06:47.805 "max_cntlid": 65519, 00:06:47.805 "max_namespaces": 32, 00:06:47.805 "min_cntlid": 1, 00:06:47.805 "model_number": "SPDK bdev Controller", 00:06:47.805 "namespaces": [ 00:06:47.805 { 00:06:47.805 "bdev_name": "Null2", 00:06:47.805 "name": "Null2", 00:06:47.805 "nguid": "BA4F113838994CDD9B24360C1D419EFB", 00:06:47.805 "nsid": 1, 00:06:47.805 "uuid": "ba4f1138-3899-4cdd-9b24-360c1d419efb" 00:06:47.805 } 00:06:47.805 ], 00:06:47.805 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:47.805 "serial_number": "SPDK00000000000002", 00:06:47.805 "subtype": "NVMe" 00:06:47.805 }, 00:06:47.805 { 00:06:47.805 "allow_any_host": true, 00:06:47.805 "hosts": [], 00:06:47.805 "listen_addresses": [ 00:06:47.805 { 00:06:47.805 "adrfam": "IPv4", 00:06:47.805 "traddr": "10.0.0.2", 00:06:47.805 "trsvcid": "4420", 00:06:47.805 "trtype": "TCP" 00:06:47.805 } 00:06:47.805 ], 00:06:47.805 "max_cntlid": 65519, 00:06:47.805 "max_namespaces": 32, 00:06:47.805 "min_cntlid": 1, 00:06:47.805 "model_number": "SPDK bdev Controller", 00:06:47.805 "namespaces": [ 00:06:47.805 { 00:06:47.805 "bdev_name": "Null3", 00:06:47.805 "name": "Null3", 00:06:47.805 "nguid": "80DEE28D2BBA441C96135440DB64A745", 00:06:47.805 "nsid": 1, 00:06:47.805 "uuid": "80dee28d-2bba-441c-9613-5440db64a745" 00:06:47.805 } 00:06:47.805 ], 00:06:47.805 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:47.805 "serial_number": "SPDK00000000000003", 00:06:47.805 "subtype": "NVMe" 00:06:47.805 }, 00:06:47.805 { 00:06:47.805 "allow_any_host": true, 00:06:47.805 "hosts": [], 00:06:47.805 "listen_addresses": [ 00:06:47.805 { 00:06:47.805 "adrfam": "IPv4", 00:06:47.805 "traddr": "10.0.0.2", 00:06:47.805 "trsvcid": "4420", 00:06:47.805 "trtype": "TCP" 00:06:47.805 } 00:06:47.805 ], 00:06:47.805 "max_cntlid": 65519, 00:06:47.805 "max_namespaces": 32, 00:06:47.805 "min_cntlid": 1, 00:06:47.805 "model_number": "SPDK bdev Controller", 00:06:47.805 "namespaces": [ 00:06:47.805 { 00:06:47.805 "bdev_name": "Null4", 00:06:47.805 "name": "Null4", 00:06:47.805 "nguid": "966223D21B4A4F55B0DF540227B3A9BE", 00:06:47.805 "nsid": 1, 00:06:47.805 "uuid": "966223d2-1b4a-4f55-b0df-540227b3a9be" 00:06:47.805 } 00:06:47.805 ], 00:06:47.805 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:47.805 "serial_number": "SPDK00000000000004", 00:06:47.805 "subtype": "NVMe" 00:06:47.805 } 00:06:47.805 ] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:47.805 16:22:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.805 16:22:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:47.805 16:22:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:47.805 16:22:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:47.805 16:22:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:47.805 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:47.805 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:48.063 rmmod nvme_tcp 00:06:48.063 rmmod nvme_fabrics 00:06:48.063 rmmod nvme_keyring 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66285 ']' 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66285 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66285 ']' 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66285 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66285 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.063 
16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.063 killing process with pid 66285 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66285' 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66285 00:06:48.063 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66285 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:48.321 00:06:48.321 real 0m2.391s 00:06:48.321 user 0m6.622s 00:06:48.321 sys 0m0.630s 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.321 16:22:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 ************************************ 00:06:48.321 END TEST nvmf_target_discovery 00:06:48.321 ************************************ 00:06:48.321 16:22:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:48.321 16:22:06 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:48.321 16:22:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:48.321 16:22:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.321 16:22:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 ************************************ 00:06:48.321 START TEST nvmf_referrals 00:06:48.321 ************************************ 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:48.321 * Looking for test storage... 
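[editor's note] The trace above lists the four demo subsystems via rpc_cmd and then tears them down one by one before nvmftestfini unloads the nvme-tcp modules and kills the target. A minimal standalone sketch of that list-and-teardown flow follows, using SPDK's scripts/rpc.py directly instead of the rpc_cmd wrapper from autotest_common.sh; the rpc.py location and the default /var/tmp/spdk.sock socket are assumptions, while the RPC names mirror the commands visible in the log.

#!/usr/bin/env bash
# Sketch: dump the NVMe-oF subsystem configuration, then delete the four
# cnode subsystems and their backing Null bdevs, roughly what discovery.sh
# does above via rpc_cmd.
set -euo pipefail
rpc=rpc.py    # assumption: SPDK's scripts/rpc.py is on PATH

# Same JSON document as shown in the log above.
"$rpc" nvmf_get_subsystems | jq .

# Tear down subsystems and bdevs in the same "seq 1 4" loop.
for i in $(seq 1 4); do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    "$rpc" bdev_null_delete "Null$i"
done

# The test then expects bdev_get_bdevs to come back empty.
"$rpc" bdev_get_bdevs | jq -r '.[].name'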
00:06:48.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.321 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:48.322 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:48.579 Cannot find device "nvmf_tgt_br" 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:48.579 Cannot find device "nvmf_tgt_br2" 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:48.579 Cannot find device "nvmf_tgt_br" 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:48.579 Cannot find device "nvmf_tgt_br2" 
00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:48.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:48.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:48.579 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:48.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:06:48.835 00:06:48.835 --- 10.0.0.2 ping statistics --- 00:06:48.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.835 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:48.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:48.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:06:48.835 00:06:48.835 --- 10.0.0.3 ping statistics --- 00:06:48.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.835 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:48.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:06:48.835 00:06:48.835 --- 10.0.0.1 ping statistics --- 00:06:48.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.835 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.835 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66510 00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66510 00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66510 ']' 00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
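[editor's note] Everything from nvmf/common.sh@154 down to the three pings above is nvmf_veth_init building the virtual test network: a network namespace for the target, veth pairs whose host-side peers hang off a bridge, 10.0.0.1 for the initiator, 10.0.0.2/10.0.0.3 for the target listeners, and an iptables rule for port 4420. Below is a condensed sketch of that topology with names and addresses taken from the log; treat it as an approximation of nvmf_veth_init rather than a verbatim copy (run as root).

# Namespace for the SPDK target.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends get bridged together.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring all ends up, then bridge the host-side peers.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Let NVMe/TCP traffic in, allow bridged forwarding, and verify reachability
# both ways, as the ping output above does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Keeping the target in its own namespace is what lets the host-side nvme commands exercise a real TCP path over the bridge instead of plain loopback.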
00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.836 16:22:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:48.836 [2024-07-21 16:22:06.946374] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:48.836 [2024-07-21 16:22:06.946473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.091 [2024-07-21 16:22:07.088767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.091 [2024-07-21 16:22:07.177365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.091 [2024-07-21 16:22:07.177418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.092 [2024-07-21 16:22:07.177435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.092 [2024-07-21 16:22:07.177450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.092 [2024-07-21 16:22:07.177461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:49.092 [2024-07-21 16:22:07.177843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.092 [2024-07-21 16:22:07.177930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.092 [2024-07-21 16:22:07.178028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.092 [2024-07-21 16:22:07.178004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.022 16:22:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.022 16:22:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:50.022 16:22:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:50.022 16:22:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:50.022 16:22:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.022 16:22:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.022 16:22:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:50.022 16:22:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.022 16:22:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.022 [2024-07-21 16:22:07.998002] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.022 [2024-07-21 16:22:08.033127] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.022 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 
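[editor's note] The first pass of referrals.sh, traced above, creates the TCP transport, adds a discovery listener on 10.0.0.2:8009, registers three referrals (127.0.0.2 through 127.0.0.4, port 4430), and checks them from two sides: the target's referral table via nvmf_discovery_get_referrals and an actual discovery log page fetched with nvme discover. It then removes them and expects both views to be empty. A compact sketch of that flow, with the --hostnqn/--hostid flags from the log dropped for brevity and rpc.py assumed to be on PATH:

rpc=rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

# Register three referrals pointing at other discovery services.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# View 1: the target's own referral table.
"$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# View 2: what an initiator sees in the discovery log page.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# Remove everything again; both views should now be empty.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done
"$rpc" nvmf_discovery_get_referrals | jq length   # expected: 0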
00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:50.280 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:50.281 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.281 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:50.281 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.538 16:22:08 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.538 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:50.539 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:50.795 
16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:50.795 16:22:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current 
discovery subsystem").traddr' 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:51.051 rmmod nvme_tcp 00:06:51.051 rmmod nvme_fabrics 00:06:51.051 rmmod nvme_keyring 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66510 ']' 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66510 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66510 ']' 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66510 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66510 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.051 killing process with pid 66510 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66510' 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66510 00:06:51.051 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66510 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:51.309 
00:06:51.309 real 0m3.031s 00:06:51.309 user 0m9.798s 00:06:51.309 sys 0m0.825s 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.309 16:22:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:51.309 ************************************ 00:06:51.309 END TEST nvmf_referrals 00:06:51.309 ************************************ 00:06:51.309 16:22:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:51.310 16:22:09 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:51.310 16:22:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.310 16:22:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.310 16:22:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.310 ************************************ 00:06:51.310 START TEST nvmf_connect_disconnect 00:06:51.310 ************************************ 00:06:51.310 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:51.572 * Looking for test storage... 00:06:51.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.572 16:22:09 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br 
nomaster 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:51.572 Cannot find device "nvmf_tgt_br" 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:51.572 Cannot find device "nvmf_tgt_br2" 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:51.572 Cannot find device "nvmf_tgt_br" 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:51.572 Cannot find device "nvmf_tgt_br2" 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:51.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:51.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:51.572 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:51.836 16:22:09 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:51.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:06:51.836 00:06:51.836 --- 10.0.0.2 ping statistics --- 00:06:51.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.836 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:51.836 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:51.836 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:06:51.836 00:06:51.836 --- 10.0.0.3 ping statistics --- 00:06:51.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.836 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:51.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:06:51.836 00:06:51.836 --- 10.0.0.1 ping statistics --- 00:06:51.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.836 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66813 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66813 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66813 ']' 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.836 16:22:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:52.093 [2024-07-21 16:22:10.049287] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:06:52.093 [2024-07-21 16:22:10.049396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.093 [2024-07-21 16:22:10.189729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.350 [2024-07-21 16:22:10.304580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
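The ip / iptables records above come from nvmf_veth_init in test/nvmf/common.sh. Condensed into plain commands (device names, addresses and the 4420 port are copied from the trace; the "Cannot find device" / "Cannot open network namespace" lines are only the best-effort teardown of links that do not exist yet), the topology it builds is roughly the following sketch, not the script itself:

ip netns add nvmf_tgt_ns_spdk                              # target side runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # first target veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # second target veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge ties the host-side veth ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) then confirm the bridge forwards in both directions before the test continues.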
00:06:52.350 [2024-07-21 16:22:10.304656] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.350 [2024-07-21 16:22:10.304682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.350 [2024-07-21 16:22:10.304690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.350 [2024-07-21 16:22:10.304697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.350 [2024-07-21 16:22:10.304842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.350 [2024-07-21 16:22:10.304987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.350 [2024-07-21 16:22:10.305355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.350 [2024-07-21 16:22:10.305359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.915 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.915 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:52.915 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:52.915 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:52.915 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:52.915 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.915 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:52.915 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.915 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:52.915 [2024-07-21 16:22:11.107647] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
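target/connect_disconnect.sh provisions the target through rpc_cmd, the autotest wrapper that effectively drives scripts/rpc.py against /var/tmp/spdk.sock. The calls traced above and in the next records reduce to the sketch below; the sizes, NQN and listener parameters are copied from the trace, while the rpc.py path and socket are assumptions based on the repo layout:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0            # TCP transport, flags exactly as traced
$rpc bdev_malloc_create 64 512                               # 64 MB malloc bdev, 512-byte blocks -> prints "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The connect/disconnect loop itself (num_iterations=5 below) runs with xtrace switched off at connect_disconnect.sh@34, so only the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines, apparently printed by nvme disconnect, show up in the log.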
00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:53.172 [2024-07-21 16:22:11.184031] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:53.172 16:22:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:55.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:00.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:04.538 rmmod nvme_tcp 00:07:04.538 rmmod nvme_fabrics 00:07:04.538 rmmod nvme_keyring 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66813 ']' 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66813 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66813 ']' 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66813 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66813 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66813' 00:07:04.538 killing process with pid 66813 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66813 00:07:04.538 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66813 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:04.796 00:07:04.796 real 0m13.353s 00:07:04.796 user 0m49.114s 00:07:04.796 sys 0m1.623s 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.796 ************************************ 00:07:04.796 END TEST nvmf_connect_disconnect 00:07:04.796 ************************************ 00:07:04.796 16:22:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:04.796 16:22:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:04.796 16:22:22 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:04.796 16:22:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:04.796 16:22:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.796 16:22:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.796 ************************************ 00:07:04.796 START TEST nvmf_multitarget 00:07:04.796 ************************************ 00:07:04.796 16:22:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:04.796 * Looking for test storage... 
00:07:04.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:04.796 16:22:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:04.796 16:22:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:04.796 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.054 16:22:23 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:05.054 Cannot find device "nvmf_tgt_br" 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:05.054 Cannot find device "nvmf_tgt_br2" 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:05.054 Cannot find device "nvmf_tgt_br" 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:05.054 Cannot find device "nvmf_tgt_br2" 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:07:05.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:05.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:05.054 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:05.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:05.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:07:05.311 00:07:05.311 --- 10.0.0.2 ping statistics --- 00:07:05.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.311 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:05.311 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:05.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:05.311 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:07:05.311 00:07:05.312 --- 10.0.0.3 ping statistics --- 00:07:05.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.312 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:05.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:05.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:07:05.312 00:07:05.312 --- 10.0.0.1 ping statistics --- 00:07:05.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.312 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:05.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=67216 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 67216 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67216 ']' 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
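nvmfappstart -m 0xF then launches the target binary inside the namespace and blocks until its RPC socket answers. A minimal stand-in for that step, using the binary path and flags shown in the trace (waitforlisten from common/autotest_common.sh is simplified to a polling loop here, and rpc_get_methods is just a convenient no-op RPC):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, all tracepoint groups, cores 0-3
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do            # wait for the app to serve the socket
    kill -0 "$nvmfpid" 2>/dev/null || exit 1                                  # give up if the target already died
    sleep 0.5
done

The DPDK EAL banner, "Total cores available: 4" and the four "Reactor started on core N" notices that follow are the normal startup output for core mask 0xF.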
00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.312 16:22:23 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:05.312 [2024-07-21 16:22:23.481755] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:05.312 [2024-07-21 16:22:23.482162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.569 [2024-07-21 16:22:23.629123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.569 [2024-07-21 16:22:23.775975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.569 [2024-07-21 16:22:23.776279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.569 [2024-07-21 16:22:23.776466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.569 [2024-07-21 16:22:23.776615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.569 [2024-07-21 16:22:23.776666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.569 [2024-07-21 16:22:23.776950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.569 [2024-07-21 16:22:23.777067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.569 [2024-07-21 16:22:23.777204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.826 [2024-07-21 16:22:23.777205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.390 16:22:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.390 16:22:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:06.390 16:22:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.390 16:22:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.390 16:22:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:06.390 16:22:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.390 16:22:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:06.390 16:22:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:06.390 16:22:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:06.647 16:22:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:06.647 16:22:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:06.647 "nvmf_tgt_1" 00:07:06.647 16:22:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:06.904 "nvmf_tgt_2" 00:07:06.904 16:22:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
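target/multitarget.sh does its work through a dedicated helper, test/nvmf/target/multitarget_rpc.py, and only ever checks how many targets exist by piping nvmf_get_targets through jq. The assertions traced in this and the next records amount to the following sketch (the -s 32 flag is copied verbatim from the trace):

rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]     # only the default target exists at start
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32          # prints "nvmf_tgt_1"
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32          # prints "nvmf_tgt_2"
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]     # default target plus the two new ones
$rpc_py nvmf_delete_target -n nvmf_tgt_1                # each delete prints "true"
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]     # back to just the default target

These bracket tests are simply the positive form of the '[' N '!=' N ']' comparisons visible in the trace.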
00:07:06.904 16:22:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:07.170 16:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:07.170 16:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:07.170 true 00:07:07.171 16:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:07.444 true 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.444 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.444 rmmod nvme_tcp 00:07:07.444 rmmod nvme_fabrics 00:07:07.703 rmmod nvme_keyring 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 67216 ']' 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 67216 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67216 ']' 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67216 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67216 00:07:07.703 killing process with pid 67216 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67216' 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67216 00:07:07.703 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67216 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:07.961 ************************************ 00:07:07.961 END TEST nvmf_multitarget 00:07:07.961 ************************************ 00:07:07.961 00:07:07.961 real 0m3.061s 00:07:07.961 user 0m9.904s 00:07:07.961 sys 0m0.764s 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.961 16:22:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:07.961 16:22:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:07.961 16:22:26 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:07.961 16:22:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:07.961 16:22:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.961 16:22:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.961 ************************************ 00:07:07.961 START TEST nvmf_rpc 00:07:07.961 ************************************ 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:07.961 * Looking for test storage... 
00:07:07.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.961 16:22:26 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:07.962 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:08.220 Cannot find device "nvmf_tgt_br" 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:08.220 Cannot find device "nvmf_tgt_br2" 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:08.220 Cannot find device "nvmf_tgt_br" 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:08.220 Cannot find device "nvmf_tgt_br2" 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:08.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:08.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:08.220 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:08.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:08.478 00:07:08.478 --- 10.0.0.2 ping statistics --- 00:07:08.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.478 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:08.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:08.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:07:08.478 00:07:08.478 --- 10.0.0.3 ping statistics --- 00:07:08.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.478 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:08.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:08.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:08.478 00:07:08.478 --- 10.0.0.1 ping statistics --- 00:07:08.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.478 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67447 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67447 00:07:08.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67447 ']' 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.478 16:22:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.478 [2024-07-21 16:22:26.556806] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:08.478 [2024-07-21 16:22:26.557085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.737 [2024-07-21 16:22:26.695734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.737 [2024-07-21 16:22:26.787323] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.737 [2024-07-21 16:22:26.787597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:08.737 [2024-07-21 16:22:26.787618] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.737 [2024-07-21 16:22:26.787627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.737 [2024-07-21 16:22:26.787634] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.737 [2024-07-21 16:22:26.787836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.737 [2024-07-21 16:22:26.789173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.737 [2024-07-21 16:22:26.789438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.737 [2024-07-21 16:22:26.789337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:09.670 "poll_groups": [ 00:07:09.670 { 00:07:09.670 "admin_qpairs": 0, 00:07:09.670 "completed_nvme_io": 0, 00:07:09.670 "current_admin_qpairs": 0, 00:07:09.670 "current_io_qpairs": 0, 00:07:09.670 "io_qpairs": 0, 00:07:09.670 "name": "nvmf_tgt_poll_group_000", 00:07:09.670 "pending_bdev_io": 0, 00:07:09.670 "transports": [] 00:07:09.670 }, 00:07:09.670 { 00:07:09.670 "admin_qpairs": 0, 00:07:09.670 "completed_nvme_io": 0, 00:07:09.670 "current_admin_qpairs": 0, 00:07:09.670 "current_io_qpairs": 0, 00:07:09.670 "io_qpairs": 0, 00:07:09.670 "name": "nvmf_tgt_poll_group_001", 00:07:09.670 "pending_bdev_io": 0, 00:07:09.670 "transports": [] 00:07:09.670 }, 00:07:09.670 { 00:07:09.670 "admin_qpairs": 0, 00:07:09.670 "completed_nvme_io": 0, 00:07:09.670 "current_admin_qpairs": 0, 00:07:09.670 "current_io_qpairs": 0, 00:07:09.670 "io_qpairs": 0, 00:07:09.670 "name": "nvmf_tgt_poll_group_002", 00:07:09.670 "pending_bdev_io": 0, 00:07:09.670 "transports": [] 00:07:09.670 }, 00:07:09.670 { 00:07:09.670 "admin_qpairs": 0, 00:07:09.670 "completed_nvme_io": 0, 00:07:09.670 "current_admin_qpairs": 0, 00:07:09.670 "current_io_qpairs": 0, 00:07:09.670 "io_qpairs": 0, 00:07:09.670 "name": "nvmf_tgt_poll_group_003", 00:07:09.670 "pending_bdev_io": 0, 00:07:09.670 "transports": [] 00:07:09.670 } 00:07:09.670 ], 00:07:09.670 "tick_rate": 2200000000 00:07:09.670 }' 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.670 [2024-07-21 16:22:27.765211] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:09.670 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:09.671 "poll_groups": [ 00:07:09.671 { 00:07:09.671 "admin_qpairs": 0, 00:07:09.671 "completed_nvme_io": 0, 00:07:09.671 "current_admin_qpairs": 0, 00:07:09.671 "current_io_qpairs": 0, 00:07:09.671 "io_qpairs": 0, 00:07:09.671 "name": "nvmf_tgt_poll_group_000", 00:07:09.671 "pending_bdev_io": 0, 00:07:09.671 "transports": [ 00:07:09.671 { 00:07:09.671 "trtype": "TCP" 00:07:09.671 } 00:07:09.671 ] 00:07:09.671 }, 00:07:09.671 { 00:07:09.671 "admin_qpairs": 0, 00:07:09.671 "completed_nvme_io": 0, 00:07:09.671 "current_admin_qpairs": 0, 00:07:09.671 "current_io_qpairs": 0, 00:07:09.671 "io_qpairs": 0, 00:07:09.671 "name": "nvmf_tgt_poll_group_001", 00:07:09.671 "pending_bdev_io": 0, 00:07:09.671 "transports": [ 00:07:09.671 { 00:07:09.671 "trtype": "TCP" 00:07:09.671 } 00:07:09.671 ] 00:07:09.671 }, 00:07:09.671 { 00:07:09.671 "admin_qpairs": 0, 00:07:09.671 "completed_nvme_io": 0, 00:07:09.671 "current_admin_qpairs": 0, 00:07:09.671 "current_io_qpairs": 0, 00:07:09.671 "io_qpairs": 0, 00:07:09.671 "name": "nvmf_tgt_poll_group_002", 00:07:09.671 "pending_bdev_io": 0, 00:07:09.671 "transports": [ 00:07:09.671 { 00:07:09.671 "trtype": "TCP" 00:07:09.671 } 00:07:09.671 ] 00:07:09.671 }, 00:07:09.671 { 00:07:09.671 "admin_qpairs": 0, 00:07:09.671 "completed_nvme_io": 0, 00:07:09.671 "current_admin_qpairs": 0, 00:07:09.671 "current_io_qpairs": 0, 00:07:09.671 "io_qpairs": 0, 00:07:09.671 "name": "nvmf_tgt_poll_group_003", 00:07:09.671 "pending_bdev_io": 0, 00:07:09.671 "transports": [ 00:07:09.671 { 00:07:09.671 "trtype": "TCP" 00:07:09.671 } 00:07:09.671 ] 00:07:09.671 } 00:07:09.671 ], 00:07:09.671 "tick_rate": 2200000000 00:07:09.671 }' 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
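The records just above are target/rpc.sh validating the empty nvmf_get_stats output before any transport exists: jcount counts how many values a jq filter pulls out of the captured JSON (four poll group names, one per core in the 0xF mask), jsum adds the selected numeric values together (every qpair counter is still 0), and only then is the TCP transport created with nvmf_create_transport -t tcp -o -u 8192. A condensed sketch of those two helpers as they appear in the trace; feeding the JSON in via a here-string is an assumption about plumbing the trace does not show:

  # jcount: how many entries does the filter select? (expects 4 poll groups)
  jq '.poll_groups[].name' <<< "$stats" | wc -l
  # jsum: sum the selected counters (expects 0 before any host connects)
  jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'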
00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:09.671 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.929 Malloc1 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.929 [2024-07-21 16:22:27.975087] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.929 16:22:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -a 10.0.0.2 -s 4420 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -a 10.0.0.2 -s 4420 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:09.930 16:22:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -a 10.0.0.2 -s 4420 00:07:09.930 [2024-07-21 16:22:27.999632] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f' 00:07:09.930 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:09.930 could not add new controller: failed to write to nvme-fabrics device 00:07:09.930 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:09.930 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:09.930 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:09.930 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:09.930 16:22:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:09.930 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.930 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.930 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.930 16:22:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.188 16:22:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:10.188 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:10.188 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:10.188 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:10.188 16:22:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:12.088 16:22:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:12.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:12.088 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.346 [2024-07-21 16:22:30.311389] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f' 00:07:12.346 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:12.346 could not add new controller: failed to write to nvme-fabrics device 00:07:12.346 16:22:30 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:12.346 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:12.347 16:22:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:14.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:14.879 16:22:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 [2024-07-21 16:22:32.622475] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:14.879 16:22:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:16.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:16.778 16:22:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.779 [2024-07-21 16:22:34.923513] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.779 16:22:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.037 16:22:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.037 16:22:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:17.037 16:22:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.037 16:22:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:17.037 16:22:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:18.934 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:18.934 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:18.934 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.934 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:18.934 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.934 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:18.934 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.193 [2024-07-21 16:22:37.321119] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.193 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.451 16:22:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.451 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:19.451 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.451 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:19.451 16:22:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:21.349 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:21.349 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:21.349 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.349 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:21.349 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.349 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:21.349 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.607 [2024-07-21 16:22:39.622829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:21.607 16:22:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:24.134 
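The blocks of records surrounding this point are iterations of the first loop in target/rpc.sh (the script's @81-@94 range shown in the trace): each pass builds the subsystem from scratch, exposes Malloc1 as NSID 5, lets the kernel initiator connect over TCP, waits for the block device carrying the SPDKISFASTANDAWESOME serial to appear, then tears everything back down. A condensed sketch of one iteration, using only commands visible in the trace; rpc_cmd and the two waitforserial helpers are the test suite's own wrappers (the RPC wrapper talks to /var/tmp/spdk.sock, the wait helpers poll lsblk as traced above):

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f \
      --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f
  waitforserial SPDKISFASTANDAWESOME              # lsblk must report the serial before continuing
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  waitforserial_disconnect SPDKISFASTANDAWESOME   # and it must disappear again
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1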
16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:24.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:24.134 16:22:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.134 [2024-07-21 16:22:42.036204] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.134 16:22:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:24.134 16:22:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:26.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.655 [2024-07-21 16:22:44.453177] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.655 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 [2024-07-21 16:22:44.501194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 [2024-07-21 16:22:44.549237] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
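The create/teardown records traced here belong to the second loop (the script's @99-@107 range), which never connects a host at all: it repeatedly builds the subsystem, lets the target auto-assign a namespace ID to Malloc1, and immediately removes it again, purely to exercise the RPC path. A minimal sketch of one pass, again using only the RPCs visible in the trace:

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # no -n: NSID is auto-assigned
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # the auto-assigned NSID is 1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1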
00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 [2024-07-21 16:22:44.597345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
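Once the final iteration below finishes, the script pulls nvmf_get_stats one last time and, rather than expecting exact numbers, only asserts that the qpair counters summed across all four poll groups are now non-zero (the JSON further down reports 7 admin qpairs and 70 I/O qpairs in total); the EXIT trap then runs nvmftestfini, which unloads the kernel initiator modules and kills the nvmf_tgt process recorded as PID 67447. A rough sketch of that closing sequence, assuming the fresh stats JSON is in $stats and with killprocess standing in for the suite's kill-and-wait helper:

  (( $(jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}') > 0 ))
  (( $(jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}') > 0 ))
  modprobe -v -r nvme-tcp        # drags nvme_fabrics/nvme_keyring out too, per the rmmod lines
  killprocess 67447              # verified afterwards with kill -0 / ps, as the trace shows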
00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 [2024-07-21 16:22:44.645488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:26.656 "poll_groups": [ 00:07:26.656 { 00:07:26.656 "admin_qpairs": 2, 00:07:26.656 "completed_nvme_io": 164, 00:07:26.656 "current_admin_qpairs": 0, 00:07:26.656 "current_io_qpairs": 0, 00:07:26.656 "io_qpairs": 16, 00:07:26.656 "name": "nvmf_tgt_poll_group_000", 00:07:26.656 "pending_bdev_io": 0, 00:07:26.656 "transports": [ 00:07:26.656 { 00:07:26.656 "trtype": "TCP" 00:07:26.656 } 00:07:26.656 ] 00:07:26.656 }, 00:07:26.656 { 00:07:26.656 "admin_qpairs": 3, 00:07:26.656 "completed_nvme_io": 116, 00:07:26.656 "current_admin_qpairs": 0, 00:07:26.656 "current_io_qpairs": 0, 00:07:26.656 "io_qpairs": 17, 00:07:26.656 "name": "nvmf_tgt_poll_group_001", 00:07:26.656 "pending_bdev_io": 0, 00:07:26.656 "transports": [ 00:07:26.656 { 00:07:26.656 "trtype": "TCP" 00:07:26.656 } 00:07:26.656 ] 00:07:26.656 }, 00:07:26.656 { 00:07:26.656 "admin_qpairs": 1, 00:07:26.656 
"completed_nvme_io": 69, 00:07:26.656 "current_admin_qpairs": 0, 00:07:26.656 "current_io_qpairs": 0, 00:07:26.656 "io_qpairs": 19, 00:07:26.656 "name": "nvmf_tgt_poll_group_002", 00:07:26.656 "pending_bdev_io": 0, 00:07:26.656 "transports": [ 00:07:26.656 { 00:07:26.656 "trtype": "TCP" 00:07:26.656 } 00:07:26.656 ] 00:07:26.656 }, 00:07:26.656 { 00:07:26.656 "admin_qpairs": 1, 00:07:26.656 "completed_nvme_io": 71, 00:07:26.656 "current_admin_qpairs": 0, 00:07:26.656 "current_io_qpairs": 0, 00:07:26.656 "io_qpairs": 18, 00:07:26.656 "name": "nvmf_tgt_poll_group_003", 00:07:26.656 "pending_bdev_io": 0, 00:07:26.656 "transports": [ 00:07:26.656 { 00:07:26.656 "trtype": "TCP" 00:07:26.656 } 00:07:26.656 ] 00:07:26.656 } 00:07:26.656 ], 00:07:26.656 "tick_rate": 2200000000 00:07:26.656 }' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:26.656 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:26.656 rmmod nvme_tcp 00:07:26.656 rmmod nvme_fabrics 00:07:26.914 rmmod nvme_keyring 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67447 ']' 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67447 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67447 ']' 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67447 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67447 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.914 killing process with pid 67447 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67447' 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67447 00:07:26.914 16:22:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67447 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:27.173 00:07:27.173 real 0m19.198s 00:07:27.173 user 1m12.709s 00:07:27.173 sys 0m2.193s 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.173 16:22:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.173 ************************************ 00:07:27.173 END TEST nvmf_rpc 00:07:27.173 ************************************ 00:07:27.173 16:22:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:27.173 16:22:45 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:27.173 16:22:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:27.173 16:22:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.173 16:22:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.173 ************************************ 00:07:27.173 START TEST nvmf_invalid 00:07:27.173 ************************************ 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:27.173 * Looking for test storage... 
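One detail from the nvmf_rpc run that just ended is worth unpacking before the nvmf_invalid output below: the stats check expands jsum into a jq filter piped through awk, summing a per-poll-group counter across the nvmf_get_stats JSON and asserting the total is positive (admin_qpairs 2+3+1+1=7, io_qpairs 16+17+19+18=70). A reconstruction inferred from that expansion (the real jsum in target/rpc.sh may differ, for example in how $stats is fed to jq):

    # Inferred from the trace; $stats holds the JSON captured from "rpc_cmd nvmf_get_stats".
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # Usage as traced: at least one admin and one I/O queue pair must have been seen in total.
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))
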
00:07:27.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.173 
16:22:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:27.173 16:22:45 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.174 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.431 16:22:45 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:27.431 Cannot find device "nvmf_tgt_br" 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:27.431 Cannot find device "nvmf_tgt_br2" 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:27.431 Cannot find device "nvmf_tgt_br" 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:27.431 Cannot find device "nvmf_tgt_br2" 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:27.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:27.431 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:27.431 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:27.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:07:27.689 00:07:27.689 --- 10.0.0.2 ping statistics --- 00:07:27.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.689 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:27.689 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:27.689 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:07:27.689 00:07:27.689 --- 10.0.0.3 ping statistics --- 00:07:27.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.689 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:27.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:27.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:07:27.689 00:07:27.689 --- 10.0.0.1 ping statistics --- 00:07:27.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.689 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67964 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67964 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67964 ']' 00:07:27.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.689 16:22:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:27.689 [2024-07-21 16:22:45.798040] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
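The 10.0.0.x addresses used throughout these tests come from the nvmf_veth_init sequence traced just above the target start-up banner: a nvmf_tgt_ns_spdk network namespace holds the target ends of two veth pairs (10.0.0.2 and 10.0.0.3), the initiator end sits on the host at 10.0.0.1, and the peer interfaces are tied together with an nvmf_br bridge plus two iptables rules for port 4420. Condensed from the commands in the trace (root required; teardown and error handling omitted):

    # Condensed from the nvmf_veth_init trace; run as root.
    ip netns add nvmf_tgt_ns_spdk

    # one initiator-side and two target-side veth pairs
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # target ends go into the namespace; addresses as used by the tests
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring links up and bridge the host-side peers
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP (port 4420) in and let traffic hairpin across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2   # reachability check, as in the trace
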
00:07:27.689 [2024-07-21 16:22:45.798119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.947 [2024-07-21 16:22:45.933773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.947 [2024-07-21 16:22:46.047273] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.947 [2024-07-21 16:22:46.047558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.947 [2024-07-21 16:22:46.047814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.947 [2024-07-21 16:22:46.047960] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.947 [2024-07-21 16:22:46.048191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.947 [2024-07-21 16:22:46.048416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.947 [2024-07-21 16:22:46.048484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.947 [2024-07-21 16:22:46.052184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.947 [2024-07-21 16:22:46.052250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.879 16:22:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.879 16:22:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:28.879 16:22:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:28.879 16:22:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.879 16:22:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:28.879 16:22:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.879 16:22:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:28.879 16:22:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18136 00:07:29.136 [2024-07-21 16:22:47.164879] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:29.136 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/21 16:22:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18136 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:29.136 request: 00:07:29.136 { 00:07:29.136 "method": "nvmf_create_subsystem", 00:07:29.136 "params": { 00:07:29.136 "nqn": "nqn.2016-06.io.spdk:cnode18136", 00:07:29.136 "tgt_name": "foobar" 00:07:29.136 } 00:07:29.136 } 00:07:29.136 Got JSON-RPC error response 00:07:29.136 GoRPCClient: error on JSON-RPC call' 00:07:29.136 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/21 16:22:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18136 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:29.136 
request: 00:07:29.136 { 00:07:29.136 "method": "nvmf_create_subsystem", 00:07:29.136 "params": { 00:07:29.136 "nqn": "nqn.2016-06.io.spdk:cnode18136", 00:07:29.136 "tgt_name": "foobar" 00:07:29.136 } 00:07:29.136 } 00:07:29.136 Got JSON-RPC error response 00:07:29.136 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:29.136 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:29.136 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18898 00:07:29.394 [2024-07-21 16:22:47.445390] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18898: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:29.394 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/21 16:22:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18898 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:29.394 request: 00:07:29.394 { 00:07:29.394 "method": "nvmf_create_subsystem", 00:07:29.394 "params": { 00:07:29.394 "nqn": "nqn.2016-06.io.spdk:cnode18898", 00:07:29.394 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:29.394 } 00:07:29.394 } 00:07:29.394 Got JSON-RPC error response 00:07:29.394 GoRPCClient: error on JSON-RPC call' 00:07:29.394 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/21 16:22:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode18898 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:29.394 request: 00:07:29.394 { 00:07:29.394 "method": "nvmf_create_subsystem", 00:07:29.394 "params": { 00:07:29.394 "nqn": "nqn.2016-06.io.spdk:cnode18898", 00:07:29.394 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:29.394 } 00:07:29.394 } 00:07:29.394 Got JSON-RPC error response 00:07:29.394 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:29.394 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:29.394 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13174 00:07:29.653 [2024-07-21 16:22:47.729871] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13174: invalid model number 'SPDK_Controller' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/21 16:22:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode13174], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:29.653 request: 00:07:29.653 { 00:07:29.653 "method": "nvmf_create_subsystem", 00:07:29.653 "params": { 00:07:29.653 "nqn": "nqn.2016-06.io.spdk:cnode13174", 00:07:29.653 "model_number": "SPDK_Controller\u001f" 00:07:29.653 } 00:07:29.653 } 00:07:29.653 Got JSON-RPC error response 00:07:29.653 GoRPCClient: error on JSON-RPC call' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/21 16:22:47 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode13174], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:29.653 request: 00:07:29.653 { 00:07:29.653 "method": "nvmf_create_subsystem", 00:07:29.653 "params": { 00:07:29.653 "nqn": "nqn.2016-06.io.spdk:cnode13174", 00:07:29.653 "model_number": "SPDK_Controller\u001f" 00:07:29.653 } 00:07:29.653 } 00:07:29.653 Got JSON-RPC error response 00:07:29.653 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:29.653 16:22:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:29.653 16:22:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.653 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:29.911 16:22:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:07:29.911 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '*'\''d82cr{FBK-2R$T#nd:d' 00:07:29.912 16:22:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '*'\''d82cr{FBK-2R$T#nd:d' nqn.2016-06.io.spdk:cnode27344 00:07:30.169 [2024-07-21 16:22:48.158457] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27344: invalid serial number '*'d82cr{FBK-2R$T#nd:d' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/21 16:22:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27344 serial_number:*'\''d82cr{FBK-2R$T#nd:d], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN *'\''d82cr{FBK-2R$T#nd:d 00:07:30.169 request: 00:07:30.169 { 00:07:30.169 "method": "nvmf_create_subsystem", 00:07:30.169 "params": { 00:07:30.169 "nqn": "nqn.2016-06.io.spdk:cnode27344", 00:07:30.169 "serial_number": "*'\''d82cr{FBK-2R$T#nd:d" 00:07:30.169 } 00:07:30.169 } 00:07:30.169 Got JSON-RPC error response 00:07:30.169 GoRPCClient: error on JSON-RPC call' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/21 16:22:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode27344 serial_number:*'d82cr{FBK-2R$T#nd:d], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN *'d82cr{FBK-2R$T#nd:d 00:07:30.169 request: 00:07:30.169 { 00:07:30.169 "method": "nvmf_create_subsystem", 00:07:30.169 "params": { 00:07:30.169 "nqn": "nqn.2016-06.io.spdk:cnode27344", 00:07:30.169 "serial_number": "*'d82cr{FBK-2R$T#nd:d" 00:07:30.169 } 00:07:30.169 } 00:07:30.169 Got JSON-RPC error response 00:07:30.169 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 108 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:30.169 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x67' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=Y 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.170 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z5q,KXHy=lqgSdl+Bgxs0DJ3/Y&nP#gt)sQqiYX/W' 00:07:30.428 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'z5q,KXHy=lqgSdl+Bgxs0DJ3/Y&nP#gt)sQqiYX/W' 
nqn.2016-06.io.spdk:cnode30687 00:07:30.686 [2024-07-21 16:22:48.683283] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30687: invalid model number 'z5q,KXHy=lqgSdl+Bgxs0DJ3/Y&nP#gt)sQqiYX/W' 00:07:30.686 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/21 16:22:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:z5q,KXHy=lqgSdl+Bgxs0DJ3/Y&nP#gt)sQqiYX/W nqn:nqn.2016-06.io.spdk:cnode30687], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN z5q,KXHy=lqgSdl+Bgxs0DJ3/Y&nP#gt)sQqiYX/W 00:07:30.686 request: 00:07:30.686 { 00:07:30.686 "method": "nvmf_create_subsystem", 00:07:30.686 "params": { 00:07:30.686 "nqn": "nqn.2016-06.io.spdk:cnode30687", 00:07:30.686 "model_number": "z5q,KXHy=lqgSdl+Bgxs0DJ3/Y&nP#gt)sQqiYX/W" 00:07:30.686 } 00:07:30.686 } 00:07:30.686 Got JSON-RPC error response 00:07:30.686 GoRPCClient: error on JSON-RPC call' 00:07:30.686 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/21 16:22:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:z5q,KXHy=lqgSdl+Bgxs0DJ3/Y&nP#gt)sQqiYX/W nqn:nqn.2016-06.io.spdk:cnode30687], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN z5q,KXHy=lqgSdl+Bgxs0DJ3/Y&nP#gt)sQqiYX/W 00:07:30.686 request: 00:07:30.686 { 00:07:30.686 "method": "nvmf_create_subsystem", 00:07:30.686 "params": { 00:07:30.686 "nqn": "nqn.2016-06.io.spdk:cnode30687", 00:07:30.686 "model_number": "z5q,KXHy=lqgSdl+Bgxs0DJ3/Y&nP#gt)sQqiYX/W" 00:07:30.686 } 00:07:30.686 } 00:07:30.686 Got JSON-RPC error response 00:07:30.686 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:30.686 16:22:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:30.943 [2024-07-21 16:22:48.999846] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.943 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:31.201 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:31.201 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:31.201 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:31.201 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:31.201 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:31.459 [2024-07-21 16:22:49.615144] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:31.459 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/21 16:22:49 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:31.459 request: 00:07:31.459 { 00:07:31.459 "method": "nvmf_subsystem_remove_listener", 00:07:31.459 "params": { 00:07:31.459 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:31.459 "listen_address": { 00:07:31.459 "trtype": "tcp", 00:07:31.459 "traddr": "", 00:07:31.459 "trsvcid": "4421" 00:07:31.459 } 00:07:31.459 } 00:07:31.459 } 00:07:31.459 Got 
JSON-RPC error response 00:07:31.459 GoRPCClient: error on JSON-RPC call' 00:07:31.459 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/21 16:22:49 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:31.459 request: 00:07:31.459 { 00:07:31.459 "method": "nvmf_subsystem_remove_listener", 00:07:31.459 "params": { 00:07:31.459 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:31.459 "listen_address": { 00:07:31.459 "trtype": "tcp", 00:07:31.459 "traddr": "", 00:07:31.459 "trsvcid": "4421" 00:07:31.459 } 00:07:31.459 } 00:07:31.459 } 00:07:31.459 Got JSON-RPC error response 00:07:31.459 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:31.459 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17025 -i 0 00:07:31.717 [2024-07-21 16:22:49.903544] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17025: invalid cntlid range [0-65519] 00:07:31.975 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/21 16:22:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17025], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:31.975 request: 00:07:31.975 { 00:07:31.975 "method": "nvmf_create_subsystem", 00:07:31.975 "params": { 00:07:31.975 "nqn": "nqn.2016-06.io.spdk:cnode17025", 00:07:31.975 "min_cntlid": 0 00:07:31.975 } 00:07:31.975 } 00:07:31.975 Got JSON-RPC error response 00:07:31.975 GoRPCClient: error on JSON-RPC call' 00:07:31.975 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/21 16:22:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode17025], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:31.975 request: 00:07:31.975 { 00:07:31.975 "method": "nvmf_create_subsystem", 00:07:31.975 "params": { 00:07:31.975 "nqn": "nqn.2016-06.io.spdk:cnode17025", 00:07:31.975 "min_cntlid": 0 00:07:31.975 } 00:07:31.975 } 00:07:31.975 Got JSON-RPC error response 00:07:31.975 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:31.975 16:22:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21824 -i 65520 00:07:32.233 [2024-07-21 16:22:50.207970] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21824: invalid cntlid range [65520-65519] 00:07:32.233 16:22:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/21 16:22:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21824], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:32.233 request: 00:07:32.233 { 00:07:32.233 "method": "nvmf_create_subsystem", 00:07:32.233 "params": { 00:07:32.233 "nqn": "nqn.2016-06.io.spdk:cnode21824", 00:07:32.233 "min_cntlid": 65520 00:07:32.233 } 00:07:32.233 } 00:07:32.233 Got JSON-RPC error response 00:07:32.233 GoRPCClient: error on 
JSON-RPC call' 00:07:32.233 16:22:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/21 16:22:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode21824], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:32.233 request: 00:07:32.233 { 00:07:32.233 "method": "nvmf_create_subsystem", 00:07:32.233 "params": { 00:07:32.233 "nqn": "nqn.2016-06.io.spdk:cnode21824", 00:07:32.233 "min_cntlid": 65520 00:07:32.233 } 00:07:32.233 } 00:07:32.233 Got JSON-RPC error response 00:07:32.233 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:32.233 16:22:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6989 -I 0 00:07:32.492 [2024-07-21 16:22:50.504402] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6989: invalid cntlid range [1-0] 00:07:32.492 16:22:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/21 16:22:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode6989], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:32.492 request: 00:07:32.492 { 00:07:32.492 "method": "nvmf_create_subsystem", 00:07:32.492 "params": { 00:07:32.492 "nqn": "nqn.2016-06.io.spdk:cnode6989", 00:07:32.492 "max_cntlid": 0 00:07:32.492 } 00:07:32.492 } 00:07:32.492 Got JSON-RPC error response 00:07:32.492 GoRPCClient: error on JSON-RPC call' 00:07:32.493 16:22:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/21 16:22:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode6989], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:32.493 request: 00:07:32.493 { 00:07:32.493 "method": "nvmf_create_subsystem", 00:07:32.493 "params": { 00:07:32.493 "nqn": "nqn.2016-06.io.spdk:cnode6989", 00:07:32.493 "max_cntlid": 0 00:07:32.493 } 00:07:32.493 } 00:07:32.493 Got JSON-RPC error response 00:07:32.493 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:32.493 16:22:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30555 -I 65520 00:07:32.751 [2024-07-21 16:22:50.764785] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30555: invalid cntlid range [1-65520] 00:07:32.751 16:22:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/21 16:22:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode30555], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:32.751 request: 00:07:32.751 { 00:07:32.751 "method": "nvmf_create_subsystem", 00:07:32.751 "params": { 00:07:32.751 "nqn": "nqn.2016-06.io.spdk:cnode30555", 00:07:32.751 "max_cntlid": 65520 00:07:32.751 } 00:07:32.751 } 00:07:32.751 Got JSON-RPC error response 00:07:32.751 GoRPCClient: error on JSON-RPC call' 00:07:32.751 16:22:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/21 16:22:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode30555], 
err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:32.751 request: 00:07:32.751 { 00:07:32.751 "method": "nvmf_create_subsystem", 00:07:32.751 "params": { 00:07:32.751 "nqn": "nqn.2016-06.io.spdk:cnode30555", 00:07:32.751 "max_cntlid": 65520 00:07:32.751 } 00:07:32.751 } 00:07:32.751 Got JSON-RPC error response 00:07:32.751 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:32.751 16:22:50 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8340 -i 6 -I 5 00:07:33.009 [2024-07-21 16:22:51.013168] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8340: invalid cntlid range [6-5] 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/21 16:22:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode8340], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:33.009 request: 00:07:33.009 { 00:07:33.009 "method": "nvmf_create_subsystem", 00:07:33.009 "params": { 00:07:33.009 "nqn": "nqn.2016-06.io.spdk:cnode8340", 00:07:33.009 "min_cntlid": 6, 00:07:33.009 "max_cntlid": 5 00:07:33.009 } 00:07:33.009 } 00:07:33.009 Got JSON-RPC error response 00:07:33.009 GoRPCClient: error on JSON-RPC call' 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/21 16:22:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode8340], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:33.009 request: 00:07:33.009 { 00:07:33.009 "method": "nvmf_create_subsystem", 00:07:33.009 "params": { 00:07:33.009 "nqn": "nqn.2016-06.io.spdk:cnode8340", 00:07:33.009 "min_cntlid": 6, 00:07:33.009 "max_cntlid": 5 00:07:33.009 } 00:07:33.009 } 00:07:33.009 Got JSON-RPC error response 00:07:33.009 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:33.009 { 00:07:33.009 "name": "foobar", 00:07:33.009 "method": "nvmf_delete_target", 00:07:33.009 "req_id": 1 00:07:33.009 } 00:07:33.009 Got JSON-RPC error response 00:07:33.009 response: 00:07:33.009 { 00:07:33.009 "code": -32602, 00:07:33.009 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:33.009 }' 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:33.009 { 00:07:33.009 "name": "foobar", 00:07:33.009 "method": "nvmf_delete_target", 00:07:33.009 "req_id": 1 00:07:33.009 } 00:07:33.009 Got JSON-RPC error response 00:07:33.009 response: 00:07:33.009 { 00:07:33.009 "code": -32602, 00:07:33.009 "message": "The specified target doesn't exist, cannot delete it." 
00:07:33.009 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:33.009 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:33.009 rmmod nvme_tcp 00:07:33.267 rmmod nvme_fabrics 00:07:33.267 rmmod nvme_keyring 00:07:33.267 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:33.267 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67964 ']' 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67964 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 67964 ']' 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 67964 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67964 00:07:33.268 killing process with pid 67964 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67964' 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 67964 00:07:33.268 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 67964 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:33.527 00:07:33.527 real 0m6.271s 00:07:33.527 user 0m25.270s 00:07:33.527 sys 0m1.449s 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.527 16:22:51 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:07:33.527 ************************************ 00:07:33.527 END TEST nvmf_invalid 00:07:33.527 ************************************ 00:07:33.527 16:22:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:33.527 16:22:51 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:33.527 16:22:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.527 16:22:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.527 16:22:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.527 ************************************ 00:07:33.527 START TEST nvmf_abort 00:07:33.527 ************************************ 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:33.527 * Looking for test storage... 00:07:33.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.527 16:22:51 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:33.527 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:33.786 Cannot find device "nvmf_tgt_br" 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:33.786 Cannot find device "nvmf_tgt_br2" 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:33.786 Cannot find device "nvmf_tgt_br" 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:33.786 Cannot find device "nvmf_tgt_br2" 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:07:33.786 16:22:51 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:33.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:33.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:33.786 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:34.045 16:22:51 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:34.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:34.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:07:34.045 00:07:34.045 --- 10.0.0.2 ping statistics --- 00:07:34.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.045 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:34.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:34.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:07:34.045 00:07:34.045 --- 10.0.0.3 ping statistics --- 00:07:34.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.045 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:34.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:07:34.045 00:07:34.045 --- 10.0.0.1 ping statistics --- 00:07:34.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.045 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68476 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68476 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68476 ']' 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.045 16:22:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.045 [2024-07-21 16:22:52.126940] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:34.045 [2024-07-21 16:22:52.127062] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.302 [2024-07-21 16:22:52.270942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.302 [2024-07-21 16:22:52.439372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.302 [2024-07-21 16:22:52.439452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.302 [2024-07-21 16:22:52.439467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.302 [2024-07-21 16:22:52.439477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.302 [2024-07-21 16:22:52.439487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.302 [2024-07-21 16:22:52.439618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.302 [2024-07-21 16:22:52.439766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.302 [2024-07-21 16:22:52.440190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.236 [2024-07-21 16:22:53.135398] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.236 Malloc0 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:07:35.236 Delay0 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.236 [2024-07-21 16:22:53.216082] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.236 16:22:53 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:35.236 [2024-07-21 16:22:53.406830] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:37.764 Initializing NVMe Controllers 00:07:37.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:37.764 controller IO queue size 128 less than required 00:07:37.764 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:37.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:37.764 Initialization complete. Launching workers. 
00:07:37.764 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33153 00:07:37.764 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33214, failed to submit 62 00:07:37.764 success 33157, unsuccess 57, failed 0 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:37.764 rmmod nvme_tcp 00:07:37.764 rmmod nvme_fabrics 00:07:37.764 rmmod nvme_keyring 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68476 ']' 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68476 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68476 ']' 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68476 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68476 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:37.764 killing process with pid 68476 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68476' 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68476 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68476 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.764 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.024 16:22:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:38.024 ************************************ 00:07:38.024 END TEST nvmf_abort 00:07:38.024 ************************************ 00:07:38.024 00:07:38.024 real 0m4.395s 00:07:38.024 user 0m12.329s 00:07:38.024 sys 0m1.084s 00:07:38.024 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.024 16:22:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.024 16:22:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:38.024 16:22:56 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:38.024 16:22:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:38.024 16:22:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.024 16:22:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:38.024 ************************************ 00:07:38.024 START TEST nvmf_ns_hotplug_stress 00:07:38.024 ************************************ 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:38.024 * Looking for test storage... 00:07:38.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:38.024 16:22:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.024 16:22:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:38.024 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:38.025 Cannot find device "nvmf_tgt_br" 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:38.025 Cannot find device "nvmf_tgt_br2" 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:38.025 Cannot find device "nvmf_tgt_br" 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:38.025 Cannot find device "nvmf_tgt_br2" 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:07:38.025 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:38.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:38.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:38.283 16:22:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:38.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:07:38.283 00:07:38.283 --- 10.0.0.2 ping statistics --- 00:07:38.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.283 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:38.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:38.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:07:38.283 00:07:38.283 --- 10.0.0.3 ping statistics --- 00:07:38.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.283 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:38.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:38.283 00:07:38.283 --- 10.0.0.1 ping statistics --- 00:07:38.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.283 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:38.283 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68741 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68741 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68741 ']' 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:38.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.541 16:22:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:38.541 [2024-07-21 16:22:56.568964] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:07:38.541 [2024-07-21 16:22:56.569064] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.541 [2024-07-21 16:22:56.706931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.799 [2024-07-21 16:22:56.826520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
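The trace up to this point is nvmf_veth_init building the self-contained test network: the namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3), the host keeps nvmf_init_if at 10.0.0.1, the peer ends are enslaved to the nvmf_br bridge, an iptables rule opens TCP/4420, pings confirm reachability in both directions, and nvmf_tgt is then launched inside the namespace with core mask 0xE (the three reactors on cores 1-3 seen below). A minimal sketch of that topology, condensed from the commands visible in the trace (interface names and addresses taken verbatim from the log; error handling and the teardown of any previous topology omitted):

  # namespace plus two veth pairs (target side) and one initiator pair
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target-side ends into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow the NVMe/TCP port and verify connectivity before starting the target
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  # launch the target inside the namespace (as in the nvmfpid line above)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE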
00:07:38.799 [2024-07-21 16:22:56.826848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.799 [2024-07-21 16:22:56.827029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.799 [2024-07-21 16:22:56.827295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.799 [2024-07-21 16:22:56.827547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.799 [2024-07-21 16:22:56.827902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.799 [2024-07-21 16:22:56.827979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.799 [2024-07-21 16:22:56.827985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.731 16:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.731 16:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:39.731 16:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:39.731 16:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.731 16:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:39.731 16:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.731 16:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:39.731 16:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:39.731 [2024-07-21 16:22:57.880515] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.731 16:22:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:39.989 16:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.245 [2024-07-21 16:22:58.450904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.502 16:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.759 16:22:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:41.016 Malloc0 00:07:41.016 16:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:41.273 Delay0 00:07:41.273 16:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.530 16:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:41.789 NULL1 00:07:41.789 
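With the target process up, the trace above configures it over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420 plus the discovery listener, a 32 MB Malloc0 bdev wrapped in a Delay0 delay bdev that becomes namespace 1, and a NULL1 null bdev held back for the resize loop that follows. The same sequence, condensed from the rpc.py calls in the log (a sketch; the rpc.py path is shortened to a shell variable for readability):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0                 # backing RAM bdev
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes nsid 1
  $rpc_py bdev_null_create NULL1 1000 512                      # resized by the stress loop below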
16:22:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:42.046 16:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68872 00:07:42.046 16:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:42.046 16:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:42.046 16:23:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.443 Read completed with error (sct=0, sc=11) 00:07:43.443 16:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.443 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.443 16:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:43.443 16:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:43.700 true 00:07:43.700 16:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:43.700 16:23:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.632 16:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.890 16:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:44.890 16:23:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:45.146 true 00:07:45.146 16:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:45.146 16:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.404 16:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.661 16:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:45.661 16:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:45.661 true 00:07:45.661 16:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:45.661 16:23:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.595 16:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.854 16:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:46.854 16:23:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:47.113 true 00:07:47.113 16:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:47.113 16:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.371 16:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.628 16:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:47.628 16:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:47.886 true 00:07:47.886 16:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:47.886 16:23:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.144 16:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.402 16:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:48.402 16:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:48.660 true 00:07:48.660 16:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:48.660 16:23:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.594 16:23:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.852 16:23:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:49.852 16:23:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:50.111 true 00:07:50.111 16:23:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:50.111 16:23:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.370 16:23:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.629 16:23:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:50.629 16:23:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:50.887 true 00:07:50.887 16:23:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:50.887 16:23:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.146 16:23:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.404 16:23:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:51.404 16:23:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:51.675 true 00:07:51.675 16:23:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:51.675 16:23:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.607 16:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.865 16:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:52.865 16:23:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:53.121 true 00:07:53.121 16:23:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:53.121 16:23:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.378 16:23:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.636 16:23:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:53.636 16:23:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:53.894 true 00:07:53.894 16:23:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:53.894 16:23:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.153 16:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.411 16:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:54.411 16:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1012 00:07:54.667 true 00:07:54.667 16:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:54.667 16:23:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.622 16:23:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.884 16:23:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:55.884 16:23:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:56.142 true 00:07:56.142 16:23:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:56.142 16:23:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.076 16:23:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.076 16:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:57.076 16:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:57.333 true 00:07:57.333 16:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:57.333 16:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.591 16:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.848 16:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:57.848 16:23:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:58.104 true 00:07:58.104 16:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:58.104 16:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.361 16:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.619 16:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:58.619 16:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:58.876 true 00:07:58.876 16:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:07:58.876 16:23:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.822 16:23:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:59.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.079 16:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:00.079 16:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:00.338 true 00:08:00.338 16:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:00.338 16:23:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.271 16:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.271 16:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:01.271 16:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:01.529 true 00:08:01.529 16:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:01.529 16:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.786 16:23:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.044 16:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:02.044 16:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:02.301 true 00:08:02.302 16:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:02.302 16:23:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.236 16:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
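The iterations above and below all follow one pattern: while spdk_nvme_perf (PID 68872, started with -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000) keeps I/O in flight from the initiator side, the script hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one unit with bdev_null_resize; the bursts of "Read completed with error (sct=0, sc=11)" are reads racing the namespace removal, which is the point of the stress test. A rough sketch of that loop, reconstructed from the ns_hotplug_stress.sh line markers (@44-@50) in the trace rather than copied from the script:

  null_size=1000
  while kill -0 "$PERF_PID" 2> /dev/null; do                          # @44: run until perf exits
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45: hot-remove nsid 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46: hot-add it back
      ((++null_size))                                                 # @49: 1001, 1002, ...
      $rpc_py bdev_null_resize NULL1 "$null_size"                     # @50: grow NULL1 each pass
  done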
00:08:03.494 16:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:03.494 16:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:03.494 true 00:08:03.494 16:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:03.494 16:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.751 16:23:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.009 16:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:04.009 16:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:04.266 true 00:08:04.266 16:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:04.266 16:23:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.200 16:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.459 16:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:05.459 16:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:05.716 true 00:08:05.716 16:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:05.716 16:23:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.973 16:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.230 16:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:06.230 16:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:06.487 true 00:08:06.487 16:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:06.487 16:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.744 16:23:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.002 16:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:07.002 16:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:07.261 true 00:08:07.261 16:23:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:07.261 16:23:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.197 16:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.467 16:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:08.467 16:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:08.742 true 00:08:08.742 16:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:08.743 16:23:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.001 16:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.259 16:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:09.259 16:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:09.517 true 00:08:09.517 16:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:09.517 16:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.775 16:23:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.033 16:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:10.033 16:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:10.291 true 00:08:10.291 16:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:10.291 16:23:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.225 16:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.484 16:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:11.484 16:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:11.742 true 00:08:11.742 16:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:11.742 16:23:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.000 16:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.259 Initializing NVMe Controllers 00:08:12.259 Attached 
to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:12.259 Controller IO queue size 128, less than required. 00:08:12.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.259 Controller IO queue size 128, less than required. 00:08:12.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:12.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:12.259 Initialization complete. Launching workers. 00:08:12.259 ======================================================== 00:08:12.259 Latency(us) 00:08:12.259 Device Information : IOPS MiB/s Average min max 00:08:12.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 783.26 0.38 79440.33 2921.35 1018657.35 00:08:12.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10179.45 4.97 12573.72 3431.39 633923.78 00:08:12.259 ======================================================== 00:08:12.259 Total : 10962.71 5.35 17351.20 2921.35 1018657.35 00:08:12.259 00:08:12.259 16:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:12.259 16:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:12.517 true 00:08:12.775 16:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68872 00:08:12.775 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68872) - No such process 00:08:12.775 16:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68872 00:08:12.775 16:23:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.034 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.291 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:13.291 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:13.291 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:13.291 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.291 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:13.549 null0 00:08:13.549 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.549 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.549 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:13.822 null1 00:08:13.822 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:13.822 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.822 16:23:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:14.079 null2 00:08:14.079 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.079 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.079 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:14.336 null3 00:08:14.336 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.336 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.336 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:14.593 null4 00:08:14.593 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.593 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.593 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:14.850 null5 00:08:14.850 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.850 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.850 16:23:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:15.107 null6 00:08:15.108 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.108 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.108 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:15.365 null7 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
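The perf run has finished by this point (kill -0 reports "No such process" and the script waits on 68872, then detaches namespaces 1 and 2), and the trace enters the second phase: eight 100 MB null bdevs (null0..null7, 4096-byte blocks) are created and eight add_remove workers are launched in the background, one per namespace ID, each repeatedly attaching its bdev as that namespace and detaching it again; the wait on the eight worker PIDs (69911 69913 ...) appears a little further down. A sketch of this phase, reconstructed from the @14-@18 and @58-@66 trace markers (an approximation of ns_hotplug_stress.sh, not a verbatim copy):

  add_remove() {                              # @14-@18: one worker per (nsid, bdev) pair
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do        # @59/@60: create null0..null7
      $rpc_py bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do        # @62-@64: spawn the concurrent workers
      add_remove "$((i + 1))" "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"                           # @66: wait for all eight workers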
00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69911 69913 69915 69917 69919 69920 69922 69923 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.365 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:15.366 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:15.366 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:15.366 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:15.366 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.366 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.623 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.623 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:15.623 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.623 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.623 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:15.623 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.879 16:23:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:15.879 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.879 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.879 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:15.879 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.879 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.879 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:15.879 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.879 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.879 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.137 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.394 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.650 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.650 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.650 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.650 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.650 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.650 16:23:34 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.650 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.650 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.651 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.651 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.651 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.651 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.651 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.907 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.907 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.907 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.907 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.907 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.907 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.907 16:23:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.907 
16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.907 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.164 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.422 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.679 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.679 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.679 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.679 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.679 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.679 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.679 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.679 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.679 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.680 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.680 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.680 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.680 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.680 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.680 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.938 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.938 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.938 16:23:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.938 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.196 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.454 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.454 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.454 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.454 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.454 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.454 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.454 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.454 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.454 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.714 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:18.972 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:18.972 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:18.972 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.972 16:23:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:18.972 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:18.972 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:18.972 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:18.972 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:18.972 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:18.972 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.231 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.232 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:08:19.490 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:19.490 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:19.490 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:19.490 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:19.490 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.490 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.490 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:19.490 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.490 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:19.749 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.007 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.007 16:23:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.007 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.008 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.008 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.008 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.008 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.008 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:20.266 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:20.525 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.783 16:23:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:20.783 16:23:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:21.040 rmmod nvme_tcp 00:08:21.040 rmmod nvme_fabrics 00:08:21.040 rmmod nvme_keyring 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68741 ']' 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68741 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68741 ']' 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68741 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.040 16:23:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68741 00:08:21.040 killing process with pid 68741 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68741' 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68741 00:08:21.040 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68741 00:08:21.297 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.297 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.297 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.297 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.297 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.297 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.297 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.297 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.556 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:21.556 00:08:21.556 real 0m43.464s 00:08:21.556 user 3m28.633s 00:08:21.556 sys 0m12.980s 00:08:21.556 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.556 16:23:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:21.556 ************************************ 00:08:21.556 END TEST nvmf_ns_hotplug_stress 00:08:21.556 ************************************ 00:08:21.556 16:23:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:21.556 16:23:39 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:21.556 16:23:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:21.556 16:23:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.556 16:23:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.556 ************************************ 00:08:21.556 START TEST nvmf_connect_stress 00:08:21.556 ************************************ 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:21.556 * Looking for test storage... 
00:08:21.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.556 16:23:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:21.557 Cannot find device "nvmf_tgt_br" 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:21.557 Cannot find device "nvmf_tgt_br2" 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:21.557 Cannot find device "nvmf_tgt_br" 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:21.557 Cannot find device "nvmf_tgt_br2" 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:08:21.557 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:21.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:21.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:21.815 16:23:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:21.815 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:21.815 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:21.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:21.815 00:08:21.815 --- 10.0.0.2 ping statistics --- 00:08:21.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.815 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:21.815 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:21.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:21.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:08:21.815 00:08:21.815 --- 10.0.0.3 ping statistics --- 00:08:21.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.815 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:21.815 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:21.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:21.815 00:08:21.815 --- 10.0.0.1 ping statistics --- 00:08:21.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.816 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:21.816 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.816 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:08:21.816 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.816 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.816 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.816 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=71213 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 71213 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 71213 ']' 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
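At this point nvmf_veth_init has finished building the topology that the three pings above verify: a veth pair for the initiator side (nvmf_init_if at 10.0.0.1/24), two veth pairs whose far ends (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and an nvmf_br bridge joining the host-side ends, after which nvmf_tgt is started inside that namespace with core mask 0xE. A condensed sketch of the same setup, using only the interface names, addresses, and iptables rules visible in the trace (error handling omitted):

  # veth/netns topology used by this run (names and IPs taken from the trace above)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT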
00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.074 16:23:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:22.074 [2024-07-21 16:23:40.115013] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:22.074 [2024-07-21 16:23:40.115094] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.074 [2024-07-21 16:23:40.257146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.332 [2024-07-21 16:23:40.405305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.332 [2024-07-21 16:23:40.405387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.332 [2024-07-21 16:23:40.405399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.332 [2024-07-21 16:23:40.405408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.332 [2024-07-21 16:23:40.405416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.332 [2024-07-21 16:23:40.405594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.332 [2024-07-21 16:23:40.406398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.332 [2024-07-21 16:23:40.406417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.898 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.898 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:22.898 16:23:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.898 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.898 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.156 [2024-07-21 16:23:41.149059] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.156 [2024-07-21 16:23:41.169243] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.156 NULL1 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71269 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.156 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.413 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.413 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:23.413 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.413 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.413 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:23.978 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.978 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:23.978 16:23:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:23.978 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.978 16:23:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.235 16:23:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:24.235 16:23:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:24.235 16:23:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.235 16:23:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.235 16:23:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.504 16:23:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.504 16:23:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:24.504 16:23:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.504 16:23:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.504 16:23:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.781 16:23:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.781 16:23:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:24.781 16:23:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:24.781 16:23:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.781 16:23:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.038 16:23:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.039 16:23:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:25.039 16:23:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.039 16:23:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.039 16:23:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.603 16:23:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.603 16:23:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:25.603 16:23:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.603 16:23:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.603 16:23:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:25.861 16:23:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.861 16:23:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:25.861 16:23:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:25.861 16:23:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.861 16:23:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.119 16:23:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.119 16:23:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:26.119 16:23:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.119 16:23:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.119 16:23:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.376 16:23:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.376 16:23:44 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 71269 00:08:26.376 16:23:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.376 16:23:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.376 16:23:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.634 16:23:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.634 16:23:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:26.634 16:23:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:26.634 16:23:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.634 16:23:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.200 16:23:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.200 16:23:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:27.200 16:23:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.200 16:23:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.200 16:23:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 16:23:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.458 16:23:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:27.458 16:23:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.458 16:23:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.458 16:23:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.716 16:23:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.716 16:23:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:27.716 16:23:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.716 16:23:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.716 16:23:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:27.974 16:23:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.974 16:23:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:27.974 16:23:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:27.974 16:23:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.974 16:23:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.233 16:23:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.233 16:23:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:28.233 16:23:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.233 16:23:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.233 16:23:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.799 16:23:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.799 16:23:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:28.799 16:23:46 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:28.799 16:23:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.799 16:23:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.058 16:23:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.058 16:23:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:29.058 16:23:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.058 16:23:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.058 16:23:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.316 16:23:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.316 16:23:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:29.316 16:23:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.316 16:23:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.316 16:23:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.575 16:23:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.575 16:23:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:29.575 16:23:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.575 16:23:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.575 16:23:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.834 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.834 16:23:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:29.834 16:23:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.834 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.834 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.402 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.402 16:23:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:30.402 16:23:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.402 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.402 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.661 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.661 16:23:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:30.661 16:23:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.661 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.661 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.920 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.920 16:23:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:30.920 16:23:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:08:30.920 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.920 16:23:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.179 16:23:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.179 16:23:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:31.179 16:23:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.179 16:23:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.179 16:23:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:31.442 16:23:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.442 16:23:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:31.442 16:23:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:31.442 16:23:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.442 16:23:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.025 16:23:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.025 16:23:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:32.025 16:23:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.025 16:23:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.025 16:23:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.283 16:23:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.283 16:23:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:32.283 16:23:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.283 16:23:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.283 16:23:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.541 16:23:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.541 16:23:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:32.541 16:23:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.541 16:23:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.541 16:23:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:32.799 16:23:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.799 16:23:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:32.799 16:23:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.799 16:23:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.799 16:23:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.057 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.057 16:23:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:33.057 16:23:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.057 16:23:51 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.057 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:33.315 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71269 00:08:33.574 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71269) - No such process 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71269 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.574 rmmod nvme_tcp 00:08:33.574 rmmod nvme_fabrics 00:08:33.574 rmmod nvme_keyring 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 71213 ']' 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 71213 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 71213 ']' 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 71213 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71213 00:08:33.574 killing process with pid 71213 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71213' 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 71213 00:08:33.574 16:23:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 71213 00:08:33.832 16:23:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.832 16:23:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.832 16:23:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
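The long run of kill -0 71269 / rpc_cmd pairs above is the core of connect_stress.sh: the connect_stress initiator is left running in the background against nqn.2016-06.io.spdk:cnode1 for its 10-second window while the script keeps replaying a batch of RPCs, so the target is servicing RPC traffic at the same time as the host is repeatedly connecting and disconnecting. Once kill -0 reports "No such process" the script waits for the tool's exit status and removes its rpc.txt. A rough sketch of that loop, reconstructed from the script/line tags in the trace (the contents written into rpc.txt are not visible in this log and are only assumed to be read-only query RPCs):

  # shape of the connect_stress supervision loop, reconstructed from the trace
  # (paths and arguments from the log; rpc.txt contents are an assumption)
  rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
  /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!
  while kill -0 "$PERF_PID"; do   # prints "No such process" once the tool exits, ending the loop
      rpc_cmd < "$rpcs"           # rpc_cmd is the suite's JSON-RPC wrapper
  done
  wait "$PERF_PID"                # surface the stress tool's exit status to run_test
  rm -f "$rpcs"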
00:08:33.832 16:23:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.832 16:23:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.832 16:23:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.832 16:23:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.832 16:23:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.102 16:23:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:34.102 00:08:34.102 real 0m12.491s 00:08:34.102 user 0m41.357s 00:08:34.102 sys 0m3.258s 00:08:34.102 16:23:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.102 ************************************ 00:08:34.103 END TEST nvmf_connect_stress 00:08:34.103 ************************************ 00:08:34.103 16:23:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:34.103 16:23:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:34.103 16:23:52 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:34.103 16:23:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:34.103 16:23:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.103 16:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:34.103 ************************************ 00:08:34.103 START TEST nvmf_fused_ordering 00:08:34.103 ************************************ 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:34.103 * Looking for test storage... 
00:08:34.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:34.103 Cannot find device "nvmf_tgt_br" 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.103 Cannot find device "nvmf_tgt_br2" 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:34.103 Cannot find device "nvmf_tgt_br" 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:34.103 Cannot find device "nvmf_tgt_br2" 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:08:34.103 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:34.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:34.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:08:34.362 00:08:34.362 --- 10.0.0.2 ping statistics --- 00:08:34.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.362 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:34.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:34.362 00:08:34.362 --- 10.0.0.3 ping statistics --- 00:08:34.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.362 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:34.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:34.362 00:08:34.362 --- 10.0.0.1 ping statistics --- 00:08:34.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.362 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.362 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:34.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71602 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71602 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71602 ']' 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
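nvmfappstart does the same thing here for the fused-ordering run as it did for connect_stress, just with a single-core mask: it launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (so its TCP listener will live behind 10.0.0.2) and waitforlisten then blocks until the application answers on its JSON-RPC socket at /var/tmp/spdk.sock. A minimal stand-in for that start-and-wait step, using the command line from the trace and a simple polling loop in place of the suite's waitforlisten helper (the polling RPC and retry interval are assumptions):

  # start the target in the test namespace (command line taken from the trace)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # crude replacement for waitforlisten: poll the RPC socket until the app responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done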
00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.620 16:23:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:34.620 [2024-07-21 16:23:52.642233] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:34.620 [2024-07-21 16:23:52.642335] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.620 [2024-07-21 16:23:52.784997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.885 [2024-07-21 16:23:52.893120] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.885 [2024-07-21 16:23:52.893215] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.885 [2024-07-21 16:23:52.893230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.885 [2024-07-21 16:23:52.893240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.885 [2024-07-21 16:23:52.893249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.885 [2024-07-21 16:23:52.893308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.819 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.819 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:35.819 16:23:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.819 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.819 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:35.820 [2024-07-21 16:23:53.706983] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
00:08:35.820 [2024-07-21 16:23:53.723021] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:35.820 NULL1 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:35.820 16:23:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:35.820 [2024-07-21 16:23:53.777619] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
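Stripped of the xtrace noise, the setup above is a short RPC sequence against the new target followed by the initiator-side fused_ordering example. A sketch issuing the same calls with scripts/rpc.py; rpc_cmd in the harness is, as far as this log shows, a thin wrapper around it, and the -s /var/tmp/spdk.sock socket path is assumed from the "Waiting for process ..." message earlier:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_create_transport -t tcp -o -u 8192                    # same transport options as NVMF_TRANSPORT_OPTS above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                             # allow any host, fixed serial, up to 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                                 # listener on the in-namespace target address
$rpc bdev_null_create NULL1 1000 512                            # 1000 MB null bdev, 512-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1     # becomes namespace 1 of cnode1

# Exercise fused command ordering against that subsystem (run from the root namespace).
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'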
00:08:35.820 [2024-07-21 16:23:53.777665] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71652 ] 00:08:36.078 Attached to nqn.2016-06.io.spdk:cnode1 00:08:36.078 Namespace ID: 1 size: 1GB 00:08:36.078 fused_ordering(0) 00:08:36.078 fused_ordering(1) 00:08:36.078 fused_ordering(2) 00:08:36.078 fused_ordering(3) 00:08:36.078 fused_ordering(4) 00:08:36.078 fused_ordering(5) 00:08:36.078 fused_ordering(6) 00:08:36.078 fused_ordering(7) 00:08:36.078 fused_ordering(8) 00:08:36.078 fused_ordering(9) 00:08:36.078 fused_ordering(10) 00:08:36.078 fused_ordering(11) 00:08:36.078 fused_ordering(12) 00:08:36.078 fused_ordering(13) 00:08:36.078 fused_ordering(14) 00:08:36.078 fused_ordering(15) 00:08:36.078 fused_ordering(16) 00:08:36.078 fused_ordering(17) 00:08:36.078 fused_ordering(18) 00:08:36.078 fused_ordering(19) 00:08:36.078 fused_ordering(20) 00:08:36.078 fused_ordering(21) 00:08:36.078 fused_ordering(22) 00:08:36.078 fused_ordering(23) 00:08:36.078 fused_ordering(24) 00:08:36.078 fused_ordering(25) 00:08:36.078 fused_ordering(26) 00:08:36.078 fused_ordering(27) 00:08:36.078 fused_ordering(28) 00:08:36.078 fused_ordering(29) 00:08:36.078 fused_ordering(30) 00:08:36.078 fused_ordering(31) 00:08:36.078 fused_ordering(32) 00:08:36.078 fused_ordering(33) 00:08:36.078 fused_ordering(34) 00:08:36.078 fused_ordering(35) 00:08:36.078 fused_ordering(36) 00:08:36.078 fused_ordering(37) 00:08:36.078 fused_ordering(38) 00:08:36.078 fused_ordering(39) 00:08:36.078 fused_ordering(40) 00:08:36.078 fused_ordering(41) 00:08:36.078 fused_ordering(42) 00:08:36.078 fused_ordering(43) 00:08:36.078 fused_ordering(44) 00:08:36.078 fused_ordering(45) 00:08:36.078 fused_ordering(46) 00:08:36.078 fused_ordering(47) 00:08:36.078 fused_ordering(48) 00:08:36.078 fused_ordering(49) 00:08:36.078 fused_ordering(50) 00:08:36.078 fused_ordering(51) 00:08:36.078 fused_ordering(52) 00:08:36.078 fused_ordering(53) 00:08:36.078 fused_ordering(54) 00:08:36.078 fused_ordering(55) 00:08:36.078 fused_ordering(56) 00:08:36.078 fused_ordering(57) 00:08:36.078 fused_ordering(58) 00:08:36.078 fused_ordering(59) 00:08:36.078 fused_ordering(60) 00:08:36.078 fused_ordering(61) 00:08:36.078 fused_ordering(62) 00:08:36.078 fused_ordering(63) 00:08:36.078 fused_ordering(64) 00:08:36.078 fused_ordering(65) 00:08:36.078 fused_ordering(66) 00:08:36.078 fused_ordering(67) 00:08:36.078 fused_ordering(68) 00:08:36.078 fused_ordering(69) 00:08:36.078 fused_ordering(70) 00:08:36.078 fused_ordering(71) 00:08:36.078 fused_ordering(72) 00:08:36.078 fused_ordering(73) 00:08:36.078 fused_ordering(74) 00:08:36.078 fused_ordering(75) 00:08:36.078 fused_ordering(76) 00:08:36.078 fused_ordering(77) 00:08:36.078 fused_ordering(78) 00:08:36.078 fused_ordering(79) 00:08:36.078 fused_ordering(80) 00:08:36.078 fused_ordering(81) 00:08:36.078 fused_ordering(82) 00:08:36.078 fused_ordering(83) 00:08:36.078 fused_ordering(84) 00:08:36.078 fused_ordering(85) 00:08:36.078 fused_ordering(86) 00:08:36.078 fused_ordering(87) 00:08:36.078 fused_ordering(88) 00:08:36.078 fused_ordering(89) 00:08:36.078 fused_ordering(90) 00:08:36.078 fused_ordering(91) 00:08:36.078 fused_ordering(92) 00:08:36.078 fused_ordering(93) 00:08:36.078 fused_ordering(94) 00:08:36.078 fused_ordering(95) 00:08:36.078 fused_ordering(96) 00:08:36.078 fused_ordering(97) 00:08:36.078 
fused_ordering(98) 00:08:36.078 fused_ordering(99) 00:08:36.078 fused_ordering(100) 00:08:36.078 fused_ordering(101) 00:08:36.078 fused_ordering(102) 00:08:36.078 fused_ordering(103) 00:08:36.078 fused_ordering(104) 00:08:36.078 fused_ordering(105) 00:08:36.078 fused_ordering(106) 00:08:36.078 fused_ordering(107) 00:08:36.078 fused_ordering(108) 00:08:36.078 fused_ordering(109) 00:08:36.078 fused_ordering(110) 00:08:36.078 fused_ordering(111) 00:08:36.078 fused_ordering(112) 00:08:36.078 fused_ordering(113) 00:08:36.078 fused_ordering(114) 00:08:36.078 fused_ordering(115) 00:08:36.078 fused_ordering(116) 00:08:36.078 fused_ordering(117) 00:08:36.078 fused_ordering(118) 00:08:36.078 fused_ordering(119) 00:08:36.078 fused_ordering(120) 00:08:36.078 fused_ordering(121) 00:08:36.078 fused_ordering(122) 00:08:36.078 fused_ordering(123) 00:08:36.078 fused_ordering(124) 00:08:36.078 fused_ordering(125) 00:08:36.078 fused_ordering(126) 00:08:36.078 fused_ordering(127) 00:08:36.078 fused_ordering(128) 00:08:36.078 fused_ordering(129) 00:08:36.078 fused_ordering(130) 00:08:36.078 fused_ordering(131) 00:08:36.078 fused_ordering(132) 00:08:36.078 fused_ordering(133) 00:08:36.078 fused_ordering(134) 00:08:36.078 fused_ordering(135) 00:08:36.078 fused_ordering(136) 00:08:36.079 fused_ordering(137) 00:08:36.079 fused_ordering(138) 00:08:36.079 fused_ordering(139) 00:08:36.079 fused_ordering(140) 00:08:36.079 fused_ordering(141) 00:08:36.079 fused_ordering(142) 00:08:36.079 fused_ordering(143) 00:08:36.079 fused_ordering(144) 00:08:36.079 fused_ordering(145) 00:08:36.079 fused_ordering(146) 00:08:36.079 fused_ordering(147) 00:08:36.079 fused_ordering(148) 00:08:36.079 fused_ordering(149) 00:08:36.079 fused_ordering(150) 00:08:36.079 fused_ordering(151) 00:08:36.079 fused_ordering(152) 00:08:36.079 fused_ordering(153) 00:08:36.079 fused_ordering(154) 00:08:36.079 fused_ordering(155) 00:08:36.079 fused_ordering(156) 00:08:36.079 fused_ordering(157) 00:08:36.079 fused_ordering(158) 00:08:36.079 fused_ordering(159) 00:08:36.079 fused_ordering(160) 00:08:36.079 fused_ordering(161) 00:08:36.079 fused_ordering(162) 00:08:36.079 fused_ordering(163) 00:08:36.079 fused_ordering(164) 00:08:36.079 fused_ordering(165) 00:08:36.079 fused_ordering(166) 00:08:36.079 fused_ordering(167) 00:08:36.079 fused_ordering(168) 00:08:36.079 fused_ordering(169) 00:08:36.079 fused_ordering(170) 00:08:36.079 fused_ordering(171) 00:08:36.079 fused_ordering(172) 00:08:36.079 fused_ordering(173) 00:08:36.079 fused_ordering(174) 00:08:36.079 fused_ordering(175) 00:08:36.079 fused_ordering(176) 00:08:36.079 fused_ordering(177) 00:08:36.079 fused_ordering(178) 00:08:36.079 fused_ordering(179) 00:08:36.079 fused_ordering(180) 00:08:36.079 fused_ordering(181) 00:08:36.079 fused_ordering(182) 00:08:36.079 fused_ordering(183) 00:08:36.079 fused_ordering(184) 00:08:36.079 fused_ordering(185) 00:08:36.079 fused_ordering(186) 00:08:36.079 fused_ordering(187) 00:08:36.079 fused_ordering(188) 00:08:36.079 fused_ordering(189) 00:08:36.079 fused_ordering(190) 00:08:36.079 fused_ordering(191) 00:08:36.079 fused_ordering(192) 00:08:36.079 fused_ordering(193) 00:08:36.079 fused_ordering(194) 00:08:36.079 fused_ordering(195) 00:08:36.079 fused_ordering(196) 00:08:36.079 fused_ordering(197) 00:08:36.079 fused_ordering(198) 00:08:36.079 fused_ordering(199) 00:08:36.079 fused_ordering(200) 00:08:36.079 fused_ordering(201) 00:08:36.079 fused_ordering(202) 00:08:36.079 fused_ordering(203) 00:08:36.079 fused_ordering(204) 00:08:36.079 fused_ordering(205) 
00:08:36.644 fused_ordering(206) 00:08:36.644 fused_ordering(207) 00:08:36.644 fused_ordering(208) 00:08:36.644 fused_ordering(209) 00:08:36.644 fused_ordering(210) 00:08:36.644 fused_ordering(211) 00:08:36.644 fused_ordering(212) 00:08:36.644 fused_ordering(213) 00:08:36.644 fused_ordering(214) 00:08:36.644 fused_ordering(215) 00:08:36.644 fused_ordering(216) 00:08:36.644 fused_ordering(217) 00:08:36.644 fused_ordering(218) 00:08:36.644 fused_ordering(219) 00:08:36.644 fused_ordering(220) 00:08:36.644 fused_ordering(221) 00:08:36.644 fused_ordering(222) 00:08:36.644 fused_ordering(223) 00:08:36.644 fused_ordering(224) 00:08:36.644 fused_ordering(225) 00:08:36.644 fused_ordering(226) 00:08:36.644 fused_ordering(227) 00:08:36.644 fused_ordering(228) 00:08:36.644 fused_ordering(229) 00:08:36.644 fused_ordering(230) 00:08:36.644 fused_ordering(231) 00:08:36.644 fused_ordering(232) 00:08:36.644 fused_ordering(233) 00:08:36.644 fused_ordering(234) 00:08:36.644 fused_ordering(235) 00:08:36.644 fused_ordering(236) 00:08:36.644 fused_ordering(237) 00:08:36.644 fused_ordering(238) 00:08:36.644 fused_ordering(239) 00:08:36.644 fused_ordering(240) 00:08:36.644 fused_ordering(241) 00:08:36.644 fused_ordering(242) 00:08:36.644 fused_ordering(243) 00:08:36.644 fused_ordering(244) 00:08:36.644 fused_ordering(245) 00:08:36.644 fused_ordering(246) 00:08:36.644 fused_ordering(247) 00:08:36.644 fused_ordering(248) 00:08:36.644 fused_ordering(249) 00:08:36.644 fused_ordering(250) 00:08:36.644 fused_ordering(251) 00:08:36.644 fused_ordering(252) 00:08:36.644 fused_ordering(253) 00:08:36.644 fused_ordering(254) 00:08:36.644 fused_ordering(255) 00:08:36.644 fused_ordering(256) 00:08:36.644 fused_ordering(257) 00:08:36.644 fused_ordering(258) 00:08:36.644 fused_ordering(259) 00:08:36.644 fused_ordering(260) 00:08:36.644 fused_ordering(261) 00:08:36.644 fused_ordering(262) 00:08:36.644 fused_ordering(263) 00:08:36.644 fused_ordering(264) 00:08:36.644 fused_ordering(265) 00:08:36.644 fused_ordering(266) 00:08:36.644 fused_ordering(267) 00:08:36.644 fused_ordering(268) 00:08:36.644 fused_ordering(269) 00:08:36.644 fused_ordering(270) 00:08:36.644 fused_ordering(271) 00:08:36.644 fused_ordering(272) 00:08:36.644 fused_ordering(273) 00:08:36.644 fused_ordering(274) 00:08:36.644 fused_ordering(275) 00:08:36.644 fused_ordering(276) 00:08:36.644 fused_ordering(277) 00:08:36.644 fused_ordering(278) 00:08:36.644 fused_ordering(279) 00:08:36.644 fused_ordering(280) 00:08:36.644 fused_ordering(281) 00:08:36.644 fused_ordering(282) 00:08:36.644 fused_ordering(283) 00:08:36.644 fused_ordering(284) 00:08:36.644 fused_ordering(285) 00:08:36.644 fused_ordering(286) 00:08:36.644 fused_ordering(287) 00:08:36.644 fused_ordering(288) 00:08:36.644 fused_ordering(289) 00:08:36.644 fused_ordering(290) 00:08:36.644 fused_ordering(291) 00:08:36.644 fused_ordering(292) 00:08:36.644 fused_ordering(293) 00:08:36.644 fused_ordering(294) 00:08:36.644 fused_ordering(295) 00:08:36.644 fused_ordering(296) 00:08:36.644 fused_ordering(297) 00:08:36.644 fused_ordering(298) 00:08:36.644 fused_ordering(299) 00:08:36.644 fused_ordering(300) 00:08:36.644 fused_ordering(301) 00:08:36.644 fused_ordering(302) 00:08:36.644 fused_ordering(303) 00:08:36.644 fused_ordering(304) 00:08:36.644 fused_ordering(305) 00:08:36.644 fused_ordering(306) 00:08:36.644 fused_ordering(307) 00:08:36.644 fused_ordering(308) 00:08:36.644 fused_ordering(309) 00:08:36.644 fused_ordering(310) 00:08:36.644 fused_ordering(311) 00:08:36.644 fused_ordering(312) 00:08:36.644 
fused_ordering(313) 00:08:36.644 fused_ordering(314) 00:08:36.644 fused_ordering(315) 00:08:36.644 fused_ordering(316) 00:08:36.644 fused_ordering(317) 00:08:36.644 fused_ordering(318) 00:08:36.644 fused_ordering(319) 00:08:36.644 fused_ordering(320) 00:08:36.644 fused_ordering(321) 00:08:36.644 fused_ordering(322) 00:08:36.644 fused_ordering(323) 00:08:36.644 fused_ordering(324) 00:08:36.644 fused_ordering(325) 00:08:36.644 fused_ordering(326) 00:08:36.644 fused_ordering(327) 00:08:36.644 fused_ordering(328) 00:08:36.644 fused_ordering(329) 00:08:36.644 fused_ordering(330) 00:08:36.644 fused_ordering(331) 00:08:36.644 fused_ordering(332) 00:08:36.644 fused_ordering(333) 00:08:36.644 fused_ordering(334) 00:08:36.644 fused_ordering(335) 00:08:36.644 fused_ordering(336) 00:08:36.644 fused_ordering(337) 00:08:36.644 fused_ordering(338) 00:08:36.644 fused_ordering(339) 00:08:36.644 fused_ordering(340) 00:08:36.644 fused_ordering(341) 00:08:36.644 fused_ordering(342) 00:08:36.644 fused_ordering(343) 00:08:36.644 fused_ordering(344) 00:08:36.644 fused_ordering(345) 00:08:36.644 fused_ordering(346) 00:08:36.644 fused_ordering(347) 00:08:36.644 fused_ordering(348) 00:08:36.644 fused_ordering(349) 00:08:36.644 fused_ordering(350) 00:08:36.644 fused_ordering(351) 00:08:36.644 fused_ordering(352) 00:08:36.644 fused_ordering(353) 00:08:36.644 fused_ordering(354) 00:08:36.644 fused_ordering(355) 00:08:36.644 fused_ordering(356) 00:08:36.644 fused_ordering(357) 00:08:36.644 fused_ordering(358) 00:08:36.644 fused_ordering(359) 00:08:36.644 fused_ordering(360) 00:08:36.644 fused_ordering(361) 00:08:36.644 fused_ordering(362) 00:08:36.644 fused_ordering(363) 00:08:36.644 fused_ordering(364) 00:08:36.644 fused_ordering(365) 00:08:36.644 fused_ordering(366) 00:08:36.644 fused_ordering(367) 00:08:36.644 fused_ordering(368) 00:08:36.644 fused_ordering(369) 00:08:36.644 fused_ordering(370) 00:08:36.644 fused_ordering(371) 00:08:36.644 fused_ordering(372) 00:08:36.644 fused_ordering(373) 00:08:36.644 fused_ordering(374) 00:08:36.644 fused_ordering(375) 00:08:36.644 fused_ordering(376) 00:08:36.644 fused_ordering(377) 00:08:36.644 fused_ordering(378) 00:08:36.644 fused_ordering(379) 00:08:36.644 fused_ordering(380) 00:08:36.644 fused_ordering(381) 00:08:36.644 fused_ordering(382) 00:08:36.644 fused_ordering(383) 00:08:36.644 fused_ordering(384) 00:08:36.644 fused_ordering(385) 00:08:36.644 fused_ordering(386) 00:08:36.644 fused_ordering(387) 00:08:36.644 fused_ordering(388) 00:08:36.644 fused_ordering(389) 00:08:36.644 fused_ordering(390) 00:08:36.644 fused_ordering(391) 00:08:36.644 fused_ordering(392) 00:08:36.644 fused_ordering(393) 00:08:36.644 fused_ordering(394) 00:08:36.644 fused_ordering(395) 00:08:36.644 fused_ordering(396) 00:08:36.644 fused_ordering(397) 00:08:36.644 fused_ordering(398) 00:08:36.644 fused_ordering(399) 00:08:36.644 fused_ordering(400) 00:08:36.644 fused_ordering(401) 00:08:36.644 fused_ordering(402) 00:08:36.644 fused_ordering(403) 00:08:36.644 fused_ordering(404) 00:08:36.644 fused_ordering(405) 00:08:36.644 fused_ordering(406) 00:08:36.644 fused_ordering(407) 00:08:36.644 fused_ordering(408) 00:08:36.644 fused_ordering(409) 00:08:36.644 fused_ordering(410) 00:08:36.903 fused_ordering(411) 00:08:36.903 fused_ordering(412) 00:08:36.903 fused_ordering(413) 00:08:36.903 fused_ordering(414) 00:08:36.903 fused_ordering(415) 00:08:36.903 fused_ordering(416) 00:08:36.903 fused_ordering(417) 00:08:36.903 fused_ordering(418) 00:08:36.903 fused_ordering(419) 00:08:36.903 fused_ordering(420) 
00:08:36.903 fused_ordering(421) 00:08:36.903 fused_ordering(422) 00:08:36.903 fused_ordering(423) 00:08:36.903 fused_ordering(424) 00:08:36.903 fused_ordering(425) 00:08:36.903 fused_ordering(426) 00:08:36.903 fused_ordering(427) 00:08:36.903 fused_ordering(428) 00:08:36.903 fused_ordering(429) 00:08:36.903 fused_ordering(430) 00:08:36.903 fused_ordering(431) 00:08:36.903 fused_ordering(432) 00:08:36.903 fused_ordering(433) 00:08:36.903 fused_ordering(434) 00:08:36.903 fused_ordering(435) 00:08:36.903 fused_ordering(436) 00:08:36.903 fused_ordering(437) 00:08:36.903 fused_ordering(438) 00:08:36.903 fused_ordering(439) 00:08:36.903 fused_ordering(440) 00:08:36.903 fused_ordering(441) 00:08:36.903 fused_ordering(442) 00:08:36.903 fused_ordering(443) 00:08:36.903 fused_ordering(444) 00:08:36.903 fused_ordering(445) 00:08:36.903 fused_ordering(446) 00:08:36.903 fused_ordering(447) 00:08:36.903 fused_ordering(448) 00:08:36.903 fused_ordering(449) 00:08:36.903 fused_ordering(450) 00:08:36.903 fused_ordering(451) 00:08:36.903 fused_ordering(452) 00:08:36.903 fused_ordering(453) 00:08:36.903 fused_ordering(454) 00:08:36.903 fused_ordering(455) 00:08:36.903 fused_ordering(456) 00:08:36.903 fused_ordering(457) 00:08:36.903 fused_ordering(458) 00:08:36.903 fused_ordering(459) 00:08:36.903 fused_ordering(460) 00:08:36.903 fused_ordering(461) 00:08:36.903 fused_ordering(462) 00:08:36.903 fused_ordering(463) 00:08:36.903 fused_ordering(464) 00:08:36.903 fused_ordering(465) 00:08:36.903 fused_ordering(466) 00:08:36.903 fused_ordering(467) 00:08:36.903 fused_ordering(468) 00:08:36.903 fused_ordering(469) 00:08:36.903 fused_ordering(470) 00:08:36.903 fused_ordering(471) 00:08:36.903 fused_ordering(472) 00:08:36.903 fused_ordering(473) 00:08:36.903 fused_ordering(474) 00:08:36.903 fused_ordering(475) 00:08:36.903 fused_ordering(476) 00:08:36.903 fused_ordering(477) 00:08:36.903 fused_ordering(478) 00:08:36.903 fused_ordering(479) 00:08:36.903 fused_ordering(480) 00:08:36.903 fused_ordering(481) 00:08:36.903 fused_ordering(482) 00:08:36.903 fused_ordering(483) 00:08:36.903 fused_ordering(484) 00:08:36.903 fused_ordering(485) 00:08:36.903 fused_ordering(486) 00:08:36.903 fused_ordering(487) 00:08:36.903 fused_ordering(488) 00:08:36.903 fused_ordering(489) 00:08:36.903 fused_ordering(490) 00:08:36.903 fused_ordering(491) 00:08:36.903 fused_ordering(492) 00:08:36.903 fused_ordering(493) 00:08:36.903 fused_ordering(494) 00:08:36.903 fused_ordering(495) 00:08:36.903 fused_ordering(496) 00:08:36.903 fused_ordering(497) 00:08:36.903 fused_ordering(498) 00:08:36.903 fused_ordering(499) 00:08:36.903 fused_ordering(500) 00:08:36.903 fused_ordering(501) 00:08:36.903 fused_ordering(502) 00:08:36.903 fused_ordering(503) 00:08:36.903 fused_ordering(504) 00:08:36.903 fused_ordering(505) 00:08:36.903 fused_ordering(506) 00:08:36.903 fused_ordering(507) 00:08:36.903 fused_ordering(508) 00:08:36.903 fused_ordering(509) 00:08:36.903 fused_ordering(510) 00:08:36.903 fused_ordering(511) 00:08:36.903 fused_ordering(512) 00:08:36.903 fused_ordering(513) 00:08:36.903 fused_ordering(514) 00:08:36.903 fused_ordering(515) 00:08:36.903 fused_ordering(516) 00:08:36.903 fused_ordering(517) 00:08:36.903 fused_ordering(518) 00:08:36.903 fused_ordering(519) 00:08:36.903 fused_ordering(520) 00:08:36.903 fused_ordering(521) 00:08:36.903 fused_ordering(522) 00:08:36.903 fused_ordering(523) 00:08:36.903 fused_ordering(524) 00:08:36.903 fused_ordering(525) 00:08:36.903 fused_ordering(526) 00:08:36.903 fused_ordering(527) 00:08:36.903 
fused_ordering(528) 00:08:36.903 fused_ordering(529) 00:08:36.903 fused_ordering(530) 00:08:36.903 fused_ordering(531) 00:08:36.903 fused_ordering(532) 00:08:36.903 fused_ordering(533) 00:08:36.903 fused_ordering(534) 00:08:36.903 fused_ordering(535) 00:08:36.903 fused_ordering(536) 00:08:36.903 fused_ordering(537) 00:08:36.903 fused_ordering(538) 00:08:36.903 fused_ordering(539) 00:08:36.903 fused_ordering(540) 00:08:36.903 fused_ordering(541) 00:08:36.903 fused_ordering(542) 00:08:36.903 fused_ordering(543) 00:08:36.903 fused_ordering(544) 00:08:36.903 fused_ordering(545) 00:08:36.903 fused_ordering(546) 00:08:36.903 fused_ordering(547) 00:08:36.903 fused_ordering(548) 00:08:36.903 fused_ordering(549) 00:08:36.903 fused_ordering(550) 00:08:36.903 fused_ordering(551) 00:08:36.903 fused_ordering(552) 00:08:36.903 fused_ordering(553) 00:08:36.903 fused_ordering(554) 00:08:36.903 fused_ordering(555) 00:08:36.903 fused_ordering(556) 00:08:36.903 fused_ordering(557) 00:08:36.903 fused_ordering(558) 00:08:36.903 fused_ordering(559) 00:08:36.903 fused_ordering(560) 00:08:36.903 fused_ordering(561) 00:08:36.903 fused_ordering(562) 00:08:36.903 fused_ordering(563) 00:08:36.903 fused_ordering(564) 00:08:36.903 fused_ordering(565) 00:08:36.903 fused_ordering(566) 00:08:36.903 fused_ordering(567) 00:08:36.903 fused_ordering(568) 00:08:36.903 fused_ordering(569) 00:08:36.903 fused_ordering(570) 00:08:36.903 fused_ordering(571) 00:08:36.903 fused_ordering(572) 00:08:36.903 fused_ordering(573) 00:08:36.903 fused_ordering(574) 00:08:36.903 fused_ordering(575) 00:08:36.903 fused_ordering(576) 00:08:36.903 fused_ordering(577) 00:08:36.903 fused_ordering(578) 00:08:36.903 fused_ordering(579) 00:08:36.903 fused_ordering(580) 00:08:36.903 fused_ordering(581) 00:08:36.903 fused_ordering(582) 00:08:36.903 fused_ordering(583) 00:08:36.903 fused_ordering(584) 00:08:36.903 fused_ordering(585) 00:08:36.903 fused_ordering(586) 00:08:36.903 fused_ordering(587) 00:08:36.903 fused_ordering(588) 00:08:36.903 fused_ordering(589) 00:08:36.903 fused_ordering(590) 00:08:36.903 fused_ordering(591) 00:08:36.903 fused_ordering(592) 00:08:36.903 fused_ordering(593) 00:08:36.903 fused_ordering(594) 00:08:36.903 fused_ordering(595) 00:08:36.903 fused_ordering(596) 00:08:36.903 fused_ordering(597) 00:08:36.903 fused_ordering(598) 00:08:36.903 fused_ordering(599) 00:08:36.903 fused_ordering(600) 00:08:36.903 fused_ordering(601) 00:08:36.903 fused_ordering(602) 00:08:36.903 fused_ordering(603) 00:08:36.903 fused_ordering(604) 00:08:36.903 fused_ordering(605) 00:08:36.903 fused_ordering(606) 00:08:36.903 fused_ordering(607) 00:08:36.903 fused_ordering(608) 00:08:36.903 fused_ordering(609) 00:08:36.903 fused_ordering(610) 00:08:36.903 fused_ordering(611) 00:08:36.903 fused_ordering(612) 00:08:36.903 fused_ordering(613) 00:08:36.903 fused_ordering(614) 00:08:36.903 fused_ordering(615) 00:08:37.468 fused_ordering(616) 00:08:37.468 fused_ordering(617) 00:08:37.468 fused_ordering(618) 00:08:37.468 fused_ordering(619) 00:08:37.468 fused_ordering(620) 00:08:37.468 fused_ordering(621) 00:08:37.468 fused_ordering(622) 00:08:37.468 fused_ordering(623) 00:08:37.468 fused_ordering(624) 00:08:37.468 fused_ordering(625) 00:08:37.468 fused_ordering(626) 00:08:37.468 fused_ordering(627) 00:08:37.468 fused_ordering(628) 00:08:37.468 fused_ordering(629) 00:08:37.468 fused_ordering(630) 00:08:37.468 fused_ordering(631) 00:08:37.468 fused_ordering(632) 00:08:37.468 fused_ordering(633) 00:08:37.468 fused_ordering(634) 00:08:37.468 fused_ordering(635) 
00:08:37.468 fused_ordering(636) 00:08:37.468 fused_ordering(637) 00:08:37.468 fused_ordering(638) 00:08:37.468 fused_ordering(639) 00:08:37.468 fused_ordering(640) 00:08:37.468 fused_ordering(641) 00:08:37.468 fused_ordering(642) 00:08:37.468 fused_ordering(643) 00:08:37.468 fused_ordering(644) 00:08:37.468 fused_ordering(645) 00:08:37.468 fused_ordering(646) 00:08:37.468 fused_ordering(647) 00:08:37.468 fused_ordering(648) 00:08:37.468 fused_ordering(649) 00:08:37.468 fused_ordering(650) 00:08:37.468 fused_ordering(651) 00:08:37.468 fused_ordering(652) 00:08:37.468 fused_ordering(653) 00:08:37.468 fused_ordering(654) 00:08:37.468 fused_ordering(655) 00:08:37.468 fused_ordering(656) 00:08:37.468 fused_ordering(657) 00:08:37.468 fused_ordering(658) 00:08:37.468 fused_ordering(659) 00:08:37.468 fused_ordering(660) 00:08:37.468 fused_ordering(661) 00:08:37.468 fused_ordering(662) 00:08:37.468 fused_ordering(663) 00:08:37.468 fused_ordering(664) 00:08:37.468 fused_ordering(665) 00:08:37.468 fused_ordering(666) 00:08:37.468 fused_ordering(667) 00:08:37.468 fused_ordering(668) 00:08:37.468 fused_ordering(669) 00:08:37.468 fused_ordering(670) 00:08:37.468 fused_ordering(671) 00:08:37.468 fused_ordering(672) 00:08:37.468 fused_ordering(673) 00:08:37.468 fused_ordering(674) 00:08:37.468 fused_ordering(675) 00:08:37.468 fused_ordering(676) 00:08:37.468 fused_ordering(677) 00:08:37.468 fused_ordering(678) 00:08:37.468 fused_ordering(679) 00:08:37.468 fused_ordering(680) 00:08:37.468 fused_ordering(681) 00:08:37.468 fused_ordering(682) 00:08:37.468 fused_ordering(683) 00:08:37.468 fused_ordering(684) 00:08:37.468 fused_ordering(685) 00:08:37.468 fused_ordering(686) 00:08:37.468 fused_ordering(687) 00:08:37.468 fused_ordering(688) 00:08:37.468 fused_ordering(689) 00:08:37.468 fused_ordering(690) 00:08:37.468 fused_ordering(691) 00:08:37.468 fused_ordering(692) 00:08:37.468 fused_ordering(693) 00:08:37.468 fused_ordering(694) 00:08:37.468 fused_ordering(695) 00:08:37.468 fused_ordering(696) 00:08:37.468 fused_ordering(697) 00:08:37.468 fused_ordering(698) 00:08:37.468 fused_ordering(699) 00:08:37.468 fused_ordering(700) 00:08:37.468 fused_ordering(701) 00:08:37.469 fused_ordering(702) 00:08:37.469 fused_ordering(703) 00:08:37.469 fused_ordering(704) 00:08:37.469 fused_ordering(705) 00:08:37.469 fused_ordering(706) 00:08:37.469 fused_ordering(707) 00:08:37.469 fused_ordering(708) 00:08:37.469 fused_ordering(709) 00:08:37.469 fused_ordering(710) 00:08:37.469 fused_ordering(711) 00:08:37.469 fused_ordering(712) 00:08:37.469 fused_ordering(713) 00:08:37.469 fused_ordering(714) 00:08:37.469 fused_ordering(715) 00:08:37.469 fused_ordering(716) 00:08:37.469 fused_ordering(717) 00:08:37.469 fused_ordering(718) 00:08:37.469 fused_ordering(719) 00:08:37.469 fused_ordering(720) 00:08:37.469 fused_ordering(721) 00:08:37.469 fused_ordering(722) 00:08:37.469 fused_ordering(723) 00:08:37.469 fused_ordering(724) 00:08:37.469 fused_ordering(725) 00:08:37.469 fused_ordering(726) 00:08:37.469 fused_ordering(727) 00:08:37.469 fused_ordering(728) 00:08:37.469 fused_ordering(729) 00:08:37.469 fused_ordering(730) 00:08:37.469 fused_ordering(731) 00:08:37.469 fused_ordering(732) 00:08:37.469 fused_ordering(733) 00:08:37.469 fused_ordering(734) 00:08:37.469 fused_ordering(735) 00:08:37.469 fused_ordering(736) 00:08:37.469 fused_ordering(737) 00:08:37.469 fused_ordering(738) 00:08:37.469 fused_ordering(739) 00:08:37.469 fused_ordering(740) 00:08:37.469 fused_ordering(741) 00:08:37.469 fused_ordering(742) 00:08:37.469 
fused_ordering(743) 00:08:37.469 fused_ordering(744) 00:08:37.469 fused_ordering(745) 00:08:37.469 fused_ordering(746) 00:08:37.469 fused_ordering(747) 00:08:37.469 fused_ordering(748) 00:08:37.469 fused_ordering(749) 00:08:37.469 fused_ordering(750) 00:08:37.469 fused_ordering(751) 00:08:37.469 fused_ordering(752) 00:08:37.469 fused_ordering(753) 00:08:37.469 fused_ordering(754) 00:08:37.469 fused_ordering(755) 00:08:37.469 fused_ordering(756) 00:08:37.469 fused_ordering(757) 00:08:37.469 fused_ordering(758) 00:08:37.469 fused_ordering(759) 00:08:37.469 fused_ordering(760) 00:08:37.469 fused_ordering(761) 00:08:37.469 fused_ordering(762) 00:08:37.469 fused_ordering(763) 00:08:37.469 fused_ordering(764) 00:08:37.469 fused_ordering(765) 00:08:37.469 fused_ordering(766) 00:08:37.469 fused_ordering(767) 00:08:37.469 fused_ordering(768) 00:08:37.469 fused_ordering(769) 00:08:37.469 fused_ordering(770) 00:08:37.469 fused_ordering(771) 00:08:37.469 fused_ordering(772) 00:08:37.469 fused_ordering(773) 00:08:37.469 fused_ordering(774) 00:08:37.469 fused_ordering(775) 00:08:37.469 fused_ordering(776) 00:08:37.469 fused_ordering(777) 00:08:37.469 fused_ordering(778) 00:08:37.469 fused_ordering(779) 00:08:37.469 fused_ordering(780) 00:08:37.469 fused_ordering(781) 00:08:37.469 fused_ordering(782) 00:08:37.469 fused_ordering(783) 00:08:37.469 fused_ordering(784) 00:08:37.469 fused_ordering(785) 00:08:37.469 fused_ordering(786) 00:08:37.469 fused_ordering(787) 00:08:37.469 fused_ordering(788) 00:08:37.469 fused_ordering(789) 00:08:37.469 fused_ordering(790) 00:08:37.469 fused_ordering(791) 00:08:37.469 fused_ordering(792) 00:08:37.469 fused_ordering(793) 00:08:37.469 fused_ordering(794) 00:08:37.469 fused_ordering(795) 00:08:37.469 fused_ordering(796) 00:08:37.469 fused_ordering(797) 00:08:37.469 fused_ordering(798) 00:08:37.469 fused_ordering(799) 00:08:37.469 fused_ordering(800) 00:08:37.469 fused_ordering(801) 00:08:37.469 fused_ordering(802) 00:08:37.469 fused_ordering(803) 00:08:37.469 fused_ordering(804) 00:08:37.469 fused_ordering(805) 00:08:37.469 fused_ordering(806) 00:08:37.469 fused_ordering(807) 00:08:37.469 fused_ordering(808) 00:08:37.469 fused_ordering(809) 00:08:37.469 fused_ordering(810) 00:08:37.469 fused_ordering(811) 00:08:37.469 fused_ordering(812) 00:08:37.469 fused_ordering(813) 00:08:37.469 fused_ordering(814) 00:08:37.469 fused_ordering(815) 00:08:37.469 fused_ordering(816) 00:08:37.469 fused_ordering(817) 00:08:37.469 fused_ordering(818) 00:08:37.469 fused_ordering(819) 00:08:37.469 fused_ordering(820) 00:08:38.053 fused_ordering(821) 00:08:38.053 fused_ordering(822) 00:08:38.053 fused_ordering(823) 00:08:38.053 fused_ordering(824) 00:08:38.053 fused_ordering(825) 00:08:38.053 fused_ordering(826) 00:08:38.053 fused_ordering(827) 00:08:38.053 fused_ordering(828) 00:08:38.053 fused_ordering(829) 00:08:38.053 fused_ordering(830) 00:08:38.053 fused_ordering(831) 00:08:38.053 fused_ordering(832) 00:08:38.053 fused_ordering(833) 00:08:38.053 fused_ordering(834) 00:08:38.053 fused_ordering(835) 00:08:38.053 fused_ordering(836) 00:08:38.053 fused_ordering(837) 00:08:38.053 fused_ordering(838) 00:08:38.053 fused_ordering(839) 00:08:38.053 fused_ordering(840) 00:08:38.053 fused_ordering(841) 00:08:38.053 fused_ordering(842) 00:08:38.053 fused_ordering(843) 00:08:38.053 fused_ordering(844) 00:08:38.053 fused_ordering(845) 00:08:38.053 fused_ordering(846) 00:08:38.053 fused_ordering(847) 00:08:38.053 fused_ordering(848) 00:08:38.053 fused_ordering(849) 00:08:38.053 fused_ordering(850) 
00:08:38.053 fused_ordering(851) 00:08:38.053 fused_ordering(852) 00:08:38.053 fused_ordering(853) 00:08:38.053 fused_ordering(854) 00:08:38.053 fused_ordering(855) 00:08:38.053 fused_ordering(856) 00:08:38.053 fused_ordering(857) 00:08:38.053 fused_ordering(858) 00:08:38.053 fused_ordering(859) 00:08:38.053 fused_ordering(860) 00:08:38.053 fused_ordering(861) 00:08:38.053 fused_ordering(862) 00:08:38.053 fused_ordering(863) 00:08:38.053 fused_ordering(864) 00:08:38.053 fused_ordering(865) 00:08:38.053 fused_ordering(866) 00:08:38.053 fused_ordering(867) 00:08:38.053 fused_ordering(868) 00:08:38.053 fused_ordering(869) 00:08:38.053 fused_ordering(870) 00:08:38.053 fused_ordering(871) 00:08:38.053 fused_ordering(872) 00:08:38.053 fused_ordering(873) 00:08:38.053 fused_ordering(874) 00:08:38.053 fused_ordering(875) 00:08:38.053 fused_ordering(876) 00:08:38.053 fused_ordering(877) 00:08:38.053 fused_ordering(878) 00:08:38.053 fused_ordering(879) 00:08:38.053 fused_ordering(880) 00:08:38.053 fused_ordering(881) 00:08:38.053 fused_ordering(882) 00:08:38.053 fused_ordering(883) 00:08:38.053 fused_ordering(884) 00:08:38.053 fused_ordering(885) 00:08:38.053 fused_ordering(886) 00:08:38.053 fused_ordering(887) 00:08:38.053 fused_ordering(888) 00:08:38.053 fused_ordering(889) 00:08:38.053 fused_ordering(890) 00:08:38.053 fused_ordering(891) 00:08:38.053 fused_ordering(892) 00:08:38.053 fused_ordering(893) 00:08:38.053 fused_ordering(894) 00:08:38.053 fused_ordering(895) 00:08:38.053 fused_ordering(896) 00:08:38.053 fused_ordering(897) 00:08:38.053 fused_ordering(898) 00:08:38.053 fused_ordering(899) 00:08:38.053 fused_ordering(900) 00:08:38.053 fused_ordering(901) 00:08:38.053 fused_ordering(902) 00:08:38.053 fused_ordering(903) 00:08:38.053 fused_ordering(904) 00:08:38.053 fused_ordering(905) 00:08:38.053 fused_ordering(906) 00:08:38.053 fused_ordering(907) 00:08:38.053 fused_ordering(908) 00:08:38.053 fused_ordering(909) 00:08:38.053 fused_ordering(910) 00:08:38.053 fused_ordering(911) 00:08:38.053 fused_ordering(912) 00:08:38.053 fused_ordering(913) 00:08:38.053 fused_ordering(914) 00:08:38.053 fused_ordering(915) 00:08:38.053 fused_ordering(916) 00:08:38.053 fused_ordering(917) 00:08:38.053 fused_ordering(918) 00:08:38.053 fused_ordering(919) 00:08:38.053 fused_ordering(920) 00:08:38.053 fused_ordering(921) 00:08:38.053 fused_ordering(922) 00:08:38.053 fused_ordering(923) 00:08:38.053 fused_ordering(924) 00:08:38.053 fused_ordering(925) 00:08:38.053 fused_ordering(926) 00:08:38.053 fused_ordering(927) 00:08:38.053 fused_ordering(928) 00:08:38.053 fused_ordering(929) 00:08:38.053 fused_ordering(930) 00:08:38.053 fused_ordering(931) 00:08:38.053 fused_ordering(932) 00:08:38.053 fused_ordering(933) 00:08:38.053 fused_ordering(934) 00:08:38.053 fused_ordering(935) 00:08:38.053 fused_ordering(936) 00:08:38.053 fused_ordering(937) 00:08:38.053 fused_ordering(938) 00:08:38.053 fused_ordering(939) 00:08:38.053 fused_ordering(940) 00:08:38.053 fused_ordering(941) 00:08:38.053 fused_ordering(942) 00:08:38.053 fused_ordering(943) 00:08:38.053 fused_ordering(944) 00:08:38.053 fused_ordering(945) 00:08:38.053 fused_ordering(946) 00:08:38.053 fused_ordering(947) 00:08:38.053 fused_ordering(948) 00:08:38.053 fused_ordering(949) 00:08:38.053 fused_ordering(950) 00:08:38.053 fused_ordering(951) 00:08:38.053 fused_ordering(952) 00:08:38.053 fused_ordering(953) 00:08:38.053 fused_ordering(954) 00:08:38.053 fused_ordering(955) 00:08:38.053 fused_ordering(956) 00:08:38.053 fused_ordering(957) 00:08:38.053 
fused_ordering(958) 00:08:38.053 fused_ordering(959) 00:08:38.053 fused_ordering(960) 00:08:38.053 fused_ordering(961) 00:08:38.053 fused_ordering(962) 00:08:38.053 fused_ordering(963) 00:08:38.053 fused_ordering(964) 00:08:38.053 fused_ordering(965) 00:08:38.053 fused_ordering(966) 00:08:38.053 fused_ordering(967) 00:08:38.053 fused_ordering(968) 00:08:38.053 fused_ordering(969) 00:08:38.053 fused_ordering(970) 00:08:38.053 fused_ordering(971) 00:08:38.053 fused_ordering(972) 00:08:38.053 fused_ordering(973) 00:08:38.053 fused_ordering(974) 00:08:38.053 fused_ordering(975) 00:08:38.053 fused_ordering(976) 00:08:38.053 fused_ordering(977) 00:08:38.053 fused_ordering(978) 00:08:38.053 fused_ordering(979) 00:08:38.053 fused_ordering(980) 00:08:38.053 fused_ordering(981) 00:08:38.053 fused_ordering(982) 00:08:38.053 fused_ordering(983) 00:08:38.053 fused_ordering(984) 00:08:38.053 fused_ordering(985) 00:08:38.053 fused_ordering(986) 00:08:38.053 fused_ordering(987) 00:08:38.053 fused_ordering(988) 00:08:38.053 fused_ordering(989) 00:08:38.053 fused_ordering(990) 00:08:38.053 fused_ordering(991) 00:08:38.053 fused_ordering(992) 00:08:38.053 fused_ordering(993) 00:08:38.053 fused_ordering(994) 00:08:38.053 fused_ordering(995) 00:08:38.053 fused_ordering(996) 00:08:38.053 fused_ordering(997) 00:08:38.053 fused_ordering(998) 00:08:38.053 fused_ordering(999) 00:08:38.053 fused_ordering(1000) 00:08:38.053 fused_ordering(1001) 00:08:38.053 fused_ordering(1002) 00:08:38.053 fused_ordering(1003) 00:08:38.053 fused_ordering(1004) 00:08:38.053 fused_ordering(1005) 00:08:38.053 fused_ordering(1006) 00:08:38.053 fused_ordering(1007) 00:08:38.053 fused_ordering(1008) 00:08:38.053 fused_ordering(1009) 00:08:38.053 fused_ordering(1010) 00:08:38.053 fused_ordering(1011) 00:08:38.053 fused_ordering(1012) 00:08:38.053 fused_ordering(1013) 00:08:38.053 fused_ordering(1014) 00:08:38.053 fused_ordering(1015) 00:08:38.053 fused_ordering(1016) 00:08:38.053 fused_ordering(1017) 00:08:38.053 fused_ordering(1018) 00:08:38.053 fused_ordering(1019) 00:08:38.053 fused_ordering(1020) 00:08:38.054 fused_ordering(1021) 00:08:38.054 fused_ordering(1022) 00:08:38.054 fused_ordering(1023) 00:08:38.054 16:23:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:38.054 16:23:55 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:38.054 16:23:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.054 16:23:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.054 rmmod nvme_tcp 00:08:38.054 rmmod nvme_fabrics 00:08:38.054 rmmod nvme_keyring 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71602 ']' 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71602 00:08:38.054 16:23:56 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71602 ']' 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71602 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71602 00:08:38.054 killing process with pid 71602 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71602' 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71602 00:08:38.054 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71602 00:08:38.311 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.311 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.311 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.312 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.312 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.312 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.312 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.312 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.312 16:23:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:38.312 ************************************ 00:08:38.312 END TEST nvmf_fused_ordering 00:08:38.312 ************************************ 00:08:38.312 00:08:38.312 real 0m4.299s 00:08:38.312 user 0m5.166s 00:08:38.312 sys 0m1.459s 00:08:38.312 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.312 16:23:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:38.312 16:23:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:38.312 16:23:56 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:38.312 16:23:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:38.312 16:23:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.312 16:23:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:38.312 ************************************ 00:08:38.312 START TEST nvmf_delete_subsystem 00:08:38.312 ************************************ 00:08:38.312 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:38.570 * Looking for test storage... 
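The teardown that closed out the fused_ordering run just above follows a fixed pattern: disarm the cleanup trap, unload the initiator-side kernel modules, and kill the target pid recorded at startup. A condensed sketch; the retry loop and error handling are simplified relative to the harness, and $nvmfpid stands for the pid captured when the target was launched:

trap - SIGINT SIGTERM EXIT                          # cleanup now runs explicitly, so drop the trap

# Unload the kernel NVMe/TCP initiator modules; retry briefly in case references linger.
for i in $(seq 1 20); do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics || true

# Stop the target and wait for it to exit.
if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid" || true
fi

ip -4 addr flush nvmf_init_if 2>/dev/null || true   # drop the initiator-side test address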
00:08:38.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.570 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:38.571 Cannot find device "nvmf_tgt_br" 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.571 Cannot find device "nvmf_tgt_br2" 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:38.571 Cannot find device "nvmf_tgt_br" 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:38.571 Cannot find device "nvmf_tgt_br2" 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:38.571 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.829 16:23:56 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:38.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:08:38.829 00:08:38.829 --- 10.0.0.2 ping statistics --- 00:08:38.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.829 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:38.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:08:38.829 00:08:38.829 --- 10.0.0.3 ping statistics --- 00:08:38.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.829 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:38.829 00:08:38.829 --- 10.0.0.1 ping statistics --- 00:08:38.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.829 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71863 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71863 00:08:38.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
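The nvmf_veth_init block above builds a small bridged topology: one veth pair whose addressed end stays in the root namespace as the initiator, two veth pairs whose addressed ends move into nvmf_tgt_ns_spdk for the target, the three bridge-side ends enslaved to nvmf_br, and an iptables rule admitting TCP port 4420; the three pings then confirm connectivity in both directions. A trimmed sketch of the same commands (the for-loops are a condensation, and the error-tolerant teardown of any previous run is omitted):

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if end carries an address, the *_br end joins the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side ends into the namespace and assign the test addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge the root-namespace ends together.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# Let NVMe/TCP traffic reach the initiator interface and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks matching the pings above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1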
00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71863 ']' 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.829 16:23:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:38.829 [2024-07-21 16:23:56.969887] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:38.829 [2024-07-21 16:23:56.970223] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.087 [2024-07-21 16:23:57.111534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:39.087 [2024-07-21 16:23:57.186926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.087 [2024-07-21 16:23:57.186980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.087 [2024-07-21 16:23:57.186990] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.087 [2024-07-21 16:23:57.186997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.087 [2024-07-21 16:23:57.187003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
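Note: the lines above launch nvmf_tgt inside the namespace and then block until its RPC socket is ready. A minimal sketch of that step is below; the launch command and flags are copied from the log, while the polling loop is only an illustrative stand-in for the waitforlisten helper, not its actual implementation.
# Start the target in the namespace on cores 0-1 (-m 0x3) with all trace groups enabled (-e 0xFFFF).
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Illustrative wait: poll the default RPC socket until the target answers (assumed equivalent of waitforlisten).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
    sleep 0.5
done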
00:08:39.087 [2024-07-21 16:23:57.187105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.087 [2024-07-21 16:23:57.187361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.020 [2024-07-21 16:23:57.950096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.020 [2024-07-21 16:23:57.970314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.020 NULL1 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.020 Delay0 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.020 16:23:57 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.020 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.021 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.021 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71914 00:08:40.021 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:40.021 16:23:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:40.021 [2024-07-21 16:23:58.171011] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:41.919 16:24:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.919 16:24:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.919 16:24:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, 
sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 [2024-07-21 16:24:00.204177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9eb570 is same with the state(5) to be set 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read 
completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 [2024-07-21 16:24:00.204703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e8910 is same with the state(5) to be set 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Write completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 starting I/O failed: -6 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.179 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 [2024-07-21 16:24:00.205732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc5c000c00 is same with the state(5) to be set 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Write completed with 
error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 Read completed with error (sct=0, sc=8) 00:08:42.180 starting I/O failed: -6 00:08:42.180 Write completed with error (sct=0, sc=8) 
00:08:42.180 [2024-07-21 16:24:00.206214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc5c00d310 is same with the state(5) to be set 00:08:43.127 [2024-07-21 16:24:01.184683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c9510 is same with the state(5) to be set 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 [2024-07-21 16:24:01.204657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc5c00cff0 is same with the state(5) to be set 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, 
sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 [2024-07-21 16:24:01.204957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdc5c00d630 is same with the state(5) to be set 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 [2024-07-21 16:24:01.206010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9eb390 is same with the state(5) to be set 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Write completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 
00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 Read completed with error (sct=0, sc=8) 00:08:43.127 [2024-07-21 16:24:01.206215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9eb890 is same with the state(5) to be set 00:08:43.127 Initializing NVMe Controllers 00:08:43.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:43.127 Controller IO queue size 128, less than required. 00:08:43.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:43.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:43.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:43.127 Initialization complete. Launching workers. 00:08:43.127 ======================================================== 00:08:43.127 Latency(us) 00:08:43.127 Device Information : IOPS MiB/s Average min max 00:08:43.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.50 0.08 889462.92 509.70 1044500.44 00:08:43.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.50 0.08 969744.91 384.58 2001092.48 00:08:43.127 ======================================================== 00:08:43.127 Total : 344.99 0.17 929603.91 384.58 2001092.48 00:08:43.127 00:08:43.127 [2024-07-21 16:24:01.207303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9510 (9): Bad file descriptor 00:08:43.127 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:43.127 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.127 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:43.127 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71914 00:08:43.127 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71914 00:08:43.692 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71914) - No such process 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71914 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71914 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.692 16:24:01 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71914 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.692 [2024-07-21 16:24:01.731954] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71960 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71960 00:08:43.692 16:24:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:43.950 [2024-07-21 16:24:01.905342] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
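Note: the first pass of this test (perf pid 71914 above) provisions a delayed null bdev, drives queue-depth-128 I/O at it, and deletes the subsystem while those commands are still outstanding; the "starting I/O failed: -6" and sct=0/sc=8 completions above are the aborts that result (sc=8 corresponds to command aborted due to SQ deletion). The sketch below condenses that sequence with the RPC arguments copied from the log; the rpc() wrapper is a hypothetical stand-in for the suite's rpc_cmd helper.
# Delete-under-load sequence, condensed from the log above.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # hypothetical stand-in for rpc_cmd
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # keeps I/O in flight
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1          # issued while 128 commands are queued
while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done     # perf exits once its I/O is aborted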
00:08:44.208 16:24:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.208 16:24:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71960 00:08:44.208 16:24:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:44.773 16:24:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:44.773 16:24:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71960 00:08:44.773 16:24:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.338 16:24:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.338 16:24:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71960 00:08:45.338 16:24:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:45.595 16:24:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:45.595 16:24:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71960 00:08:45.595 16:24:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.160 16:24:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.160 16:24:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71960 00:08:46.160 16:24:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.725 16:24:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:46.725 16:24:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71960 00:08:46.725 16:24:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:46.983 Initializing NVMe Controllers 00:08:46.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:46.983 Controller IO queue size 128, less than required. 00:08:46.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:46.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:46.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:46.983 Initialization complete. Launching workers. 
00:08:46.983 ======================================================== 00:08:46.983 Latency(us) 00:08:46.983 Device Information : IOPS MiB/s Average min max 00:08:46.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002866.18 1000170.46 1041837.36 00:08:46.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004374.12 1000153.93 1012298.67 00:08:46.983 ======================================================== 00:08:46.983 Total : 256.00 0.12 1003620.15 1000153.93 1041837.36 00:08:46.983 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71960 00:08:47.241 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71960) - No such process 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71960 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:47.241 rmmod nvme_tcp 00:08:47.241 rmmod nvme_fabrics 00:08:47.241 rmmod nvme_keyring 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71863 ']' 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71863 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71863 ']' 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71863 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71863 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:47.241 killing process with pid 71863 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71863' 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71863 00:08:47.241 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71863 00:08:47.499 16:24:05 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:47.499 00:08:47.499 real 0m9.176s 00:08:47.499 user 0m28.483s 00:08:47.499 sys 0m1.559s 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.499 16:24:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.499 ************************************ 00:08:47.499 END TEST nvmf_delete_subsystem 00:08:47.499 ************************************ 00:08:47.499 16:24:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:47.499 16:24:05 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:47.499 16:24:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:47.499 16:24:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.499 16:24:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.499 ************************************ 00:08:47.499 START TEST nvmf_ns_masking 00:08:47.499 ************************************ 00:08:47.499 16:24:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:47.758 * Looking for test storage... 
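Note: before the ns_masking run starts, the nvmftestfini steps logged above unload the host-side NVMe/TCP modules, stop the target, and tear the namespace back down. A rough mirror of that cleanup is sketched below; the module and interface names come from the log, while the `ip netns delete` line is only an assumed equivalent of the remove_spdk_ns helper.
# Cleanup mirroring the nvmftestfini output above.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"       # killprocess: stop the nvmf_tgt reactor and reap it
ip netns delete nvmf_tgt_ns_spdk         # assumed equivalent of remove_spdk_ns for this run
ip -4 addr flush nvmf_init_if            # drop the initiator address, as in nvmf_tcp_fini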
00:08:47.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6b246a5f-a2fa-4932-b998-01560ae580b8 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1bec956f-ad23-4f57-bb44-83b946ff8124 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:47.759 
16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dad29623-39c5-4bf8-b506-148d2f5d37aa 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:47.759 Cannot find device "nvmf_tgt_br" 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:08:47.759 16:24:05 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:47.759 Cannot find device "nvmf_tgt_br2" 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:47.759 Cannot find device "nvmf_tgt_br" 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:47.759 Cannot find device "nvmf_tgt_br2" 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:47.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:47.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:47.759 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:48.018 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:48.018 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:48.018 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:48.018 16:24:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:48.018 16:24:06 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:48.018 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:48.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:08:48.019 00:08:48.019 --- 10.0.0.2 ping statistics --- 00:08:48.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.019 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:48.019 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:48.019 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:48.019 00:08:48.019 --- 10.0.0.3 ping statistics --- 00:08:48.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.019 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:48.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:08:48.019 00:08:48.019 --- 10.0.0.1 ping statistics --- 00:08:48.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.019 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=72203 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 72203 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72203 ']' 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.019 16:24:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:48.019 [2024-07-21 16:24:06.193752] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:08:48.019 [2024-07-21 16:24:06.193889] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.278 [2024-07-21 16:24:06.335495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.278 [2024-07-21 16:24:06.434451] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.278 [2024-07-21 16:24:06.434736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:48.278 [2024-07-21 16:24:06.434841] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.278 [2024-07-21 16:24:06.434862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.278 [2024-07-21 16:24:06.434871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.278 [2024-07-21 16:24:06.434910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:49.210 [2024-07-21 16:24:07.353092] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:08:49.210 16:24:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:49.468 Malloc1 00:08:49.468 16:24:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:49.726 Malloc2 00:08:49.726 16:24:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.984 16:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:08:50.243 16:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.501 [2024-07-21 16:24:08.479381] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.501 16:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:08:50.501 16:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dad29623-39c5-4bf8-b506-148d2f5d37aa -a 10.0.0.2 -s 4420 -i 4 00:08:50.501 16:24:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:08:50.501 16:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:50.501 16:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.501 16:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:50.501 16:24:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
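Stripped of the xtrace noise, the connect and waitforserial helpers traced immediately above amount to an nvme connect over TCP followed by polling lsblk until the subsystem serial shows up. A minimal sketch reconstructed from the traced commands (the NQNs, host UUID, address and retry limit are the ones visible in the trace; the loop body is an approximation of the helper, not its exact source):

# connect: attach the initiator to the target subsystem over TCP
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I dad29623-39c5-4bf8-b506-148d2f5d37aa -a 10.0.0.2 -s 4420 -i 4

# waitforserial: poll until the expected number of namespaces is visible as block devices
i=0
nvme_device_counter=1        # the test passes 2 here when both namespaces should be visible
nvme_devices=0
while (( i++ <= 15 )); do
    sleep 2
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
    (( nvme_devices == nvme_device_counter )) && break
done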
00:08:52.400 16:24:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:52.400 16:24:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:52.400 16:24:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:52.658 [ 0]:0x1 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c001671a3194ec7b0d846597a8722b6 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c001671a3194ec7b0d846597a8722b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:52.658 16:24:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:08:52.916 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:08:52.916 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:52.917 [ 0]:0x1 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c001671a3194ec7b0d846597a8722b6 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c001671a3194ec7b0d846597a8722b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:52.917 [ 1]:0x2 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:52.917 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:53.175 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=8aefc68e4f7c4956b941de618745efdd 00:08:53.175 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8aefc68e4f7c4956b941de618745efdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:53.175 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:08:53.175 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.175 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.434 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:08:53.691 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:08:53.691 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dad29623-39c5-4bf8-b506-148d2f5d37aa -a 10.0.0.2 -s 4420 -i 4 00:08:53.691 16:24:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:08:53.691 16:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:53.691 16:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:53.691 16:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:08:53.691 16:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:08:53.691 16:24:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:56.245 16:24:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.245 [ 0]:0x2 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8aefc68e4f7c4956b941de618745efdd 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8aefc68e4f7c4956b941de618745efdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:56.245 [ 0]:0x1 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c001671a3194ec7b0d846597a8722b6 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c001671a3194ec7b0d846597a8722b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:56.245 [ 1]:0x2 00:08:56.245 16:24:14 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:56.245 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8aefc68e4f7c4956b941de618745efdd 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8aefc68e4f7c4956b941de618745efdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:56.503 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:56.762 [ 0]:0x2 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8aefc68e4f7c4956b941de618745efdd 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8aefc68e4f7c4956b941de618745efdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
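Before the trace continues into the disconnect helper, the control-plane side of the masking exercised so far condenses to a handful of rpc.py calls against the target. A minimal sketch assembled from the commands visible in the trace (same rpc.py path and NQNs as above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# namespace 1 is added with --no-auto-visible, so it stays hidden until a host is allowed
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

# expose, then hide, namespace 1 for a specific host NQN
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

After each add/remove the test re-checks visibility from the initiator side, which is what the repeated nvme list-ns / nvme id-ns blocks above and below are doing.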
00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.762 16:24:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:57.020 16:24:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:08:57.020 16:24:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dad29623-39c5-4bf8-b506-148d2f5d37aa -a 10.0.0.2 -s 4420 -i 4 00:08:57.020 16:24:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:08:57.020 16:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:57.020 16:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:57.020 16:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:08:57.020 16:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:08:57.020 16:24:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:59.548 [ 0]:0x1 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c001671a3194ec7b0d846597a8722b6 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c001671a3194ec7b0d846597a8722b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:08:59.548 [ 1]:0x2 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8aefc68e4f7c4956b941de618745efdd 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8aefc68e4f7c4956b941de618745efdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:59.548 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:59.806 [ 0]:0x2 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8aefc68e4f7c4956b941de618745efdd 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8aefc68e4f7c4956b941de618745efdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:59.806 16:24:17 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:59.806 16:24:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:00.064 [2024-07-21 16:24:18.077553] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:00.065 2024/07/21 16:24:18 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:09:00.065 request: 00:09:00.065 { 00:09:00.065 "method": "nvmf_ns_remove_host", 00:09:00.065 "params": { 00:09:00.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:00.065 "nsid": 2, 00:09:00.065 "host": "nqn.2016-06.io.spdk:host1" 00:09:00.065 } 00:09:00.065 } 00:09:00.065 Got JSON-RPC error response 00:09:00.065 GoRPCClient: error on JSON-RPC call 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:00.065 16:24:18 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:00.065 [ 0]:0x2 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8aefc68e4f7c4956b941de618745efdd 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8aefc68e4f7c4956b941de618745efdd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72580 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72580 /var/tmp/host.sock 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72580 ']' 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:00.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
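The visibility checks repeated throughout this test follow one pattern on the initiator: confirm the namespace ID appears in the controller's active namespace list, then read its NGUID and require it to be non-zero. A minimal reconstruction of that check from the trace (the name ns_is_visible comes from the xtrace output; treat the body as a sketch rather than the exact script):

ns_is_visible() {
    local nsid=$1    # e.g. 0x1 or 0x2

    # the namespace ID must show up in the controller's active NS list
    nvme list-ns /dev/nvme0 | grep "$nsid" || return 1

    # and its NGUID must be a real value, not 32 zeroes (a masked namespace reports all zeroes)
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}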
00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:00.065 16:24:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:00.324 [2024-07-21 16:24:18.336733] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:00.324 [2024-07-21 16:24:18.336849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72580 ] 00:09:00.324 [2024-07-21 16:24:18.476083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.583 [2024-07-21 16:24:18.580720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.150 16:24:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.150 16:24:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:01.150 16:24:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.408 16:24:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:01.665 16:24:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6b246a5f-a2fa-4932-b998-01560ae580b8 00:09:01.666 16:24:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:01.666 16:24:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6B246A5FA2FA4932B99801560AE580B8 -i 00:09:01.924 16:24:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1bec956f-ad23-4f57-bb44-83b946ff8124 00:09:01.924 16:24:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:01.924 16:24:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1BEC956FAD234F57BB4483B946FF8124 -i 00:09:02.182 16:24:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:02.440 16:24:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:02.699 16:24:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:02.699 16:24:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:02.957 nvme0n1 00:09:02.957 16:24:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:02.957 16:24:20 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:03.216 nvme1n2 00:09:03.216 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:03.216 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:03.216 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:03.216 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:03.216 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:03.474 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:03.474 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:03.474 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:03.475 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:03.734 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6b246a5f-a2fa-4932-b998-01560ae580b8 == \6\b\2\4\6\a\5\f\-\a\2\f\a\-\4\9\3\2\-\b\9\9\8\-\0\1\5\6\0\a\e\5\8\0\b\8 ]] 00:09:03.734 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:03.734 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:03.734 16:24:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1bec956f-ad23-4f57-bb44-83b946ff8124 == \1\b\e\c\9\5\6\f\-\a\d\2\3\-\4\f\5\7\-\b\b\4\4\-\8\3\b\9\4\6\f\f\8\1\2\4 ]] 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72580 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72580 ']' 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72580 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72580 00:09:03.993 killing process with pid 72580 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72580' 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72580 00:09:03.993 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72580 00:09:04.569 16:24:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - 
SIGINT SIGTERM EXIT 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:04.827 rmmod nvme_tcp 00:09:04.827 rmmod nvme_fabrics 00:09:04.827 rmmod nvme_keyring 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 72203 ']' 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 72203 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72203 ']' 00:09:04.827 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72203 00:09:04.828 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:04.828 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:04.828 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72203 00:09:04.828 killing process with pid 72203 00:09:04.828 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:04.828 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:04.828 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72203' 00:09:04.828 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72203 00:09:04.828 16:24:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72203 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:05.085 00:09:05.085 real 0m17.559s 00:09:05.085 user 0m27.556s 00:09:05.085 sys 0m2.672s 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.085 16:24:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:05.085 ************************************ 00:09:05.085 END 
TEST nvmf_ns_masking 00:09:05.085 ************************************ 00:09:05.085 16:24:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:05.085 16:24:23 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:09:05.085 16:24:23 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:09:05.085 16:24:23 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:05.085 16:24:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:05.085 16:24:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.085 16:24:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:05.343 ************************************ 00:09:05.343 START TEST nvmf_host_management 00:09:05.343 ************************************ 00:09:05.343 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:05.343 * Looking for test storage... 00:09:05.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:05.344 Cannot find device "nvmf_tgt_br" 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.344 Cannot find device "nvmf_tgt_br2" 
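The "Cannot find device" messages here are expected: before building its virtual topology, the veth setup traced above and below first tears down any interfaces and namespace left over from a previous run, and each cleanup command is allowed to fail. A sketch of that idiom, assuming the bare "-- # true" entries in the trace correspond to ignored failures:

# best-effort cleanup of a previous run; missing devices are not an error
ip link set nvmf_init_br down                               || true
ip link set nvmf_tgt_br down                                || true
ip link set nvmf_tgt_br2 down                               || true
ip link delete nvmf_br type bridge                          || true
ip link delete nvmf_init_if                                 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2  || true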
00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:05.344 Cannot find device "nvmf_tgt_br" 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:05.344 Cannot find device "nvmf_tgt_br2" 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:05.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:05.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:05.344 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:05.602 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:05.602 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:05.602 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:05.602 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:05.602 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:05.602 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:05.603 
16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:05.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:09:05.603 00:09:05.603 --- 10.0.0.2 ping statistics --- 00:09:05.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.603 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:05.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:05.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:05.603 00:09:05.603 --- 10.0.0.3 ping statistics --- 00:09:05.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.603 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:05.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:05.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:05.603 00:09:05.603 --- 10.0.0.1 ping statistics --- 00:09:05.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.603 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72934 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72934 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72934 ']' 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.603 16:24:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:05.603 [2024-07-21 16:24:23.777861] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
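With the bridge wired up and the iptables rules in place, the three pings confirm reachability in both directions before the NVMe-oF target is launched inside the namespace. A rough standalone sketch of that verify-then-launch step (the binary path matches the log; the socket wait loop is a simplified stand-in for waitforlisten, not the real helper):

    # Bridge the peer ends together and allow NVMe/TCP traffic to port 4420
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity-check connectivity before starting the target
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

    # Run nvmf_tgt inside the namespace and wait for its RPC socket to appear
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done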
00:09:05.603 [2024-07-21 16:24:23.777962] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.860 [2024-07-21 16:24:23.915879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.860 [2024-07-21 16:24:24.015125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.860 [2024-07-21 16:24:24.015199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.860 [2024-07-21 16:24:24.015210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.861 [2024-07-21 16:24:24.015218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.861 [2024-07-21 16:24:24.015225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.861 [2024-07-21 16:24:24.015563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.861 [2024-07-21 16:24:24.015907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.861 [2024-07-21 16:24:24.016082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:05.861 [2024-07-21 16:24:24.016092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.795 [2024-07-21 16:24:24.786843] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.795 Malloc0 00:09:06.795 [2024-07-21 16:24:24.885135] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=73006 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 73006 /var/tmp/bdevperf.sock 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 73006 ']' 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:06.795 { 00:09:06.795 "params": { 00:09:06.795 "name": "Nvme$subsystem", 00:09:06.795 "trtype": "$TEST_TRANSPORT", 00:09:06.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.795 "adrfam": "ipv4", 00:09:06.795 "trsvcid": "$NVMF_PORT", 00:09:06.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.795 "hdgst": ${hdgst:-false}, 00:09:06.795 "ddgst": ${ddgst:-false} 00:09:06.795 }, 00:09:06.795 "method": "bdev_nvme_attach_controller" 00:09:06.795 } 00:09:06.795 EOF 00:09:06.795 )") 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:06.795 16:24:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:06.795 "params": { 00:09:06.795 "name": "Nvme0", 00:09:06.795 "trtype": "tcp", 00:09:06.795 "traddr": "10.0.0.2", 00:09:06.795 "adrfam": "ipv4", 00:09:06.795 "trsvcid": "4420", 00:09:06.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:06.795 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:06.795 "hdgst": false, 00:09:06.795 "ddgst": false 00:09:06.795 }, 00:09:06.795 "method": "bdev_nvme_attach_controller" 00:09:06.795 }' 00:09:06.795 [2024-07-21 16:24:24.991567] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:06.795 [2024-07-21 16:24:24.991672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73006 ] 00:09:07.054 [2024-07-21 16:24:25.132120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.054 [2024-07-21 16:24:25.240889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.313 Running I/O for 10 seconds... 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@55 -- # read_io_count=899 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.906 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.906 [2024-07-21 16:24:26.077109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.906 [2024-07-21 16:24:26.077428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077469] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077555] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077613] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the 
state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2012300 is same with the state(5) to be set 00:09:07.907 [2024-07-21 16:24:26.077862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.077902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.077926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.077937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.077950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.077960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.077971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.077981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 
16:24:26.078106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 
16:24:26.078403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.907 [2024-07-21 16:24:26.078484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.907 [2024-07-21 16:24:26.078493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 
16:24:26.078619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 
16:24:26.078869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.078990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.078999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 
16:24:26.079095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 
16:24:26.079300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.908 [2024-07-21 16:24:26.079405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.908 [2024-07-21 16:24:26.079416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x813820 is same with the state(5) to be set 00:09:07.908 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.908 [2024-07-21 16:24:26.079482] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x813820 was disconnected and freed. reset controller. 
00:09:07.908 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:07.908 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:07.908 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.908 [2024-07-21 16:24:26.080684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:07.909 task offset: 122880 on job bdev=Nvme0n1 fails 00:09:07.909 00:09:07.909 Latency(us) 00:09:07.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.909 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:07.909 Job: Nvme0n1 ended in about 0.65 seconds with error 00:09:07.909 Verification LBA range: start 0x0 length 0x400 00:09:07.909 Nvme0n1 : 0.65 1475.33 92.21 98.36 0.00 39646.91 4974.78 37653.41 00:09:07.909 =================================================================================================================== 00:09:07.909 Total : 1475.33 92.21 98.36 0.00 39646.91 4974.78 37653.41 00:09:07.909 [2024-07-21 16:24:26.082745] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:07.909 [2024-07-21 16:24:26.082775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x813af0 (9): Bad file descriptor 00:09:07.909 16:24:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:07.909 16:24:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:07.909 [2024-07-21 16:24:26.088108] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
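The wall of ABORTED - SQ DELETION completions and the controller reset above are the intended effect of this test step: the host is removed from the subsystem's allowed list while bdevperf still has I/O in flight, its queue pairs are torn down, and it is re-added so the host-side reset can reconnect. In terms of SPDK's RPC tooling, the driving sequence is roughly the following (scripts/rpc.py path is illustrative; the NQNs are the ones from the log):

    # Revoke the host's access while I/O is running: in-flight commands complete
    # with ABORTED - SQ DELETION and bdevperf begins resetting the controller.
    ./scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    # Re-admit the host so the reset/reconnect can finish successfully.
    ./scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0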
00:09:09.300 16:24:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 73006 00:09:09.300 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (73006) - No such process 00:09:09.300 16:24:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:09.300 16:24:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:09.300 16:24:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:09.300 16:24:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:09.300 16:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:09.300 16:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:09.300 16:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:09.300 16:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:09.300 { 00:09:09.300 "params": { 00:09:09.300 "name": "Nvme$subsystem", 00:09:09.300 "trtype": "$TEST_TRANSPORT", 00:09:09.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.300 "adrfam": "ipv4", 00:09:09.300 "trsvcid": "$NVMF_PORT", 00:09:09.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.300 "hdgst": ${hdgst:-false}, 00:09:09.300 "ddgst": ${ddgst:-false} 00:09:09.301 }, 00:09:09.301 "method": "bdev_nvme_attach_controller" 00:09:09.301 } 00:09:09.301 EOF 00:09:09.301 )") 00:09:09.301 16:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:09.301 16:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:09.301 16:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:09.301 16:24:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:09.301 "params": { 00:09:09.301 "name": "Nvme0", 00:09:09.301 "trtype": "tcp", 00:09:09.301 "traddr": "10.0.0.2", 00:09:09.301 "adrfam": "ipv4", 00:09:09.301 "trsvcid": "4420", 00:09:09.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:09.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:09.301 "hdgst": false, 00:09:09.301 "ddgst": false 00:09:09.301 }, 00:09:09.301 "method": "bdev_nvme_attach_controller" 00:09:09.301 }' 00:09:09.301 [2024-07-21 16:24:27.144965] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:09.301 [2024-07-21 16:24:27.145058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73056 ] 00:09:09.301 [2024-07-21 16:24:27.277595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.301 [2024-07-21 16:24:27.359048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.559 Running I/O for 1 seconds... 
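This second bdevperf run receives its bdev configuration on an anonymous file descriptor (--json /dev/fd/62) generated by gen_nvmf_target_json; the resolved attach-controller parameters are the ones printed above. A hand-written equivalent that writes a minimal config of the same shape to a file first (file name is illustrative, and the real generated document may carry extra entries such as bdev_wait_for_examine):

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # Same workload parameters as the log: 64-deep queue, 64 KiB verify I/O for 1 second
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1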
00:09:10.493 00:09:10.493 Latency(us) 00:09:10.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.493 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:10.493 Verification LBA range: start 0x0 length 0x400 00:09:10.493 Nvme0n1 : 1.01 1642.07 102.63 0.00 0.00 38269.72 4855.62 34317.03 00:09:10.493 =================================================================================================================== 00:09:10.493 Total : 1642.07 102.63 0.00 0.00 38269.72 4855.62 34317.03 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.752 rmmod nvme_tcp 00:09:10.752 rmmod nvme_fabrics 00:09:10.752 rmmod nvme_keyring 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72934 ']' 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72934 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72934 ']' 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72934 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72934 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72934' 00:09:10.752 killing process with pid 72934 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72934 00:09:10.752 16:24:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72934 00:09:11.010 [2024-07-21 16:24:29.192252] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:11.268 00:09:11.268 real 0m5.962s 00:09:11.268 user 0m23.267s 00:09:11.268 sys 0m1.477s 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.268 16:24:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.268 ************************************ 00:09:11.268 END TEST nvmf_host_management 00:09:11.268 ************************************ 00:09:11.268 16:24:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:11.268 16:24:29 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:11.268 16:24:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:11.268 16:24:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.268 16:24:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.268 ************************************ 00:09:11.268 START TEST nvmf_lvol 00:09:11.268 ************************************ 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:11.268 * Looking for test storage... 
00:09:11.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.268 16:24:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:11.269 16:24:29 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:11.269 Cannot find device "nvmf_tgt_br" 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.269 Cannot find device "nvmf_tgt_br2" 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:11.269 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:11.269 Cannot find device "nvmf_tgt_br" 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:11.526 Cannot find device "nvmf_tgt_br2" 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:11.526 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:11.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:09:11.784 00:09:11.784 --- 10.0.0.2 ping statistics --- 00:09:11.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.784 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:11.784 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:11.784 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:09:11.784 00:09:11.784 --- 10.0.0.3 ping statistics --- 00:09:11.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.784 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:11.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:11.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:11.784 00:09:11.784 --- 10.0.0.1 ping statistics --- 00:09:11.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.784 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73267 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73267 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73267 ']' 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.784 16:24:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:11.784 [2024-07-21 16:24:29.835287] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:11.784 [2024-07-21 16:24:29.835372] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.784 [2024-07-21 16:24:29.985399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.043 [2024-07-21 16:24:30.097842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.043 [2024-07-21 16:24:30.097904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
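The nvmf_veth_init sequence traced above builds the veth-based test topology: the initiator stays in the default namespace on 10.0.0.1, the target namespace owns 10.0.0.2 and 10.0.0.3, and all host-side veth peers hang off a single bridge. A condensed sketch of that setup follows, using the same interface names, addresses and firewall rules as the trace; it assumes root and that none of these links exist yet, and it omits the cleanup/flush steps the script runs first.

# Create the target namespace and three veth pairs (one initiator link, two target links).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in, allow forwarding across the bridge, then verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1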
00:09:12.043 [2024-07-21 16:24:30.097926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.043 [2024-07-21 16:24:30.097937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.043 [2024-07-21 16:24:30.097946] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.043 [2024-07-21 16:24:30.098094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.043 [2024-07-21 16:24:30.098247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.043 [2024-07-21 16:24:30.098258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.980 16:24:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:12.980 16:24:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:12.980 16:24:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.980 16:24:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:12.980 16:24:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:12.980 16:24:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.980 16:24:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:12.980 [2024-07-21 16:24:31.172794] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.238 16:24:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.495 16:24:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:13.495 16:24:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.754 16:24:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:13.754 16:24:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:14.012 16:24:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:14.270 16:24:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8fb75667-0b3c-409c-816f-b7b7ce579247 00:09:14.270 16:24:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8fb75667-0b3c-409c-816f-b7b7ce579247 lvol 20 00:09:14.528 16:24:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=015ae4e3-6df1-466a-b392-148a7bdb20a0 00:09:14.528 16:24:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:14.786 16:24:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 015ae4e3-6df1-466a-b392-148a7bdb20a0 00:09:15.043 16:24:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:15.043 [2024-07-21 16:24:33.200060] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.043 16:24:33 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.301 16:24:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73421 00:09:15.301 16:24:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:15.301 16:24:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:16.672 16:24:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 015ae4e3-6df1-466a-b392-148a7bdb20a0 MY_SNAPSHOT 00:09:16.672 16:24:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=59b93009-e2c3-46bd-ab64-72af2d3bd22e 00:09:16.672 16:24:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 015ae4e3-6df1-466a-b392-148a7bdb20a0 30 00:09:16.929 16:24:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 59b93009-e2c3-46bd-ab64-72af2d3bd22e MY_CLONE 00:09:17.497 16:24:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=00f14ec1-105b-426d-b459-100ce2bcf245 00:09:17.497 16:24:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 00f14ec1-105b-426d-b459-100ce2bcf245 00:09:18.062 16:24:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73421 00:09:26.217 Initializing NVMe Controllers 00:09:26.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:26.217 Controller IO queue size 128, less than required. 00:09:26.217 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:26.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:26.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:26.217 Initialization complete. Launching workers. 
00:09:26.217 ======================================================== 00:09:26.217 Latency(us) 00:09:26.217 Device Information : IOPS MiB/s Average min max 00:09:26.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8019.75 31.33 15962.06 1889.32 110335.57 00:09:26.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7434.07 29.04 17223.60 4610.04 81146.31 00:09:26.217 ======================================================== 00:09:26.217 Total : 15453.82 60.37 16568.93 1889.32 110335.57 00:09:26.217 00:09:26.217 16:24:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.217 16:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 015ae4e3-6df1-466a-b392-148a7bdb20a0 00:09:26.217 16:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fb75667-0b3c-409c-816f-b7b7ce579247 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.475 rmmod nvme_tcp 00:09:26.475 rmmod nvme_fabrics 00:09:26.475 rmmod nvme_keyring 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73267 ']' 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73267 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73267 ']' 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73267 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73267 00:09:26.475 killing process with pid 73267 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73267' 00:09:26.475 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73267 00:09:26.476 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73267 00:09:26.733 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.733 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
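Underneath the xtrace noise, the nvmf_lvol body above provisions a raid0-backed lvstore, exports a 20 MiB lvol over NVMe/TCP, and then snapshots, resizes, clones and inflates it while spdk_nvme_perf drives random writes. A condensed sketch of that RPC sequence follows; the shell variables are just shorthand for the UUIDs the trace captures from each create call, and the perf invocation itself is left out.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Two malloc bdevs striped into raid0, then an lvstore and a 20 MiB lvol on top.
$rpc_py bdev_malloc_create 64 512                      # -> Malloc0
$rpc_py bdev_malloc_create 64 512                      # -> Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol over NVMe/TCP on 10.0.0.2:4420.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# While spdk_nvme_perf runs against the subsystem: snapshot, grow the lvol,
# clone the snapshot, then inflate the clone.
snapshot=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc_py bdev_lvol_resize "$lvol" 30
clone=$($rpc_py bdev_lvol_clone "$snapshot" MY_CLONE)
$rpc_py bdev_lvol_inflate "$clone"

# Teardown mirrors the trace: drop the subsystem, the lvol and the lvstore.
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc_py bdev_lvol_delete "$lvol"
$rpc_py bdev_lvol_delete_lvstore -u "$lvs"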
00:09:26.733 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:26.733 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:26.733 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:26.733 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.733 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.733 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.734 16:24:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:26.734 00:09:26.734 real 0m15.588s 00:09:26.734 user 1m5.446s 00:09:26.734 sys 0m3.546s 00:09:26.734 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:26.734 ************************************ 00:09:26.734 16:24:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:26.734 END TEST nvmf_lvol 00:09:26.734 ************************************ 00:09:26.734 16:24:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:26.734 16:24:44 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:26.734 16:24:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:26.734 16:24:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:26.734 16:24:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:26.992 ************************************ 00:09:26.992 START TEST nvmf_lvs_grow 00:09:26.992 ************************************ 00:09:26.992 16:24:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:26.992 * Looking for test storage... 
00:09:26.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.992 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:26.993 Cannot find device "nvmf_tgt_br" 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.993 Cannot find device "nvmf_tgt_br2" 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:26.993 Cannot find device "nvmf_tgt_br" 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:26.993 Cannot find device "nvmf_tgt_br2" 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.993 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:26.993 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:27.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:27.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:09:27.252 00:09:27.252 --- 10.0.0.2 ping statistics --- 00:09:27.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.252 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:27.252 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:27.252 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:27.253 00:09:27.253 --- 10.0.0.3 ping statistics --- 00:09:27.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.253 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:09:27.253 00:09:27.253 --- 10.0.0.1 ping statistics --- 00:09:27.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.253 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73778 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73778 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73778 ']' 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
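nvmfappstart above launches nvmf_tgt inside the target namespace and blocks until the RPC socket answers before issuing any RPCs. A minimal sketch of that launch-and-wait pattern, using the binary path and flags from the trace; the polling loop is only a rough stand-in for the suite's waitforlisten helper, not its actual implementation.

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
nvmf_tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Launch the target in the namespace: shm id 0, all tracepoint groups, core mask 0x1.
"${NVMF_TARGET_NS_CMD[@]}" "$nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# Poll the RPC socket until the target answers (crude waitforlisten substitute).
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done

# First RPC once the socket is live, as in the trace: create the TCP transport.
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192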
00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.253 16:24:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.253 [2024-07-21 16:24:45.458761] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:09:27.253 [2024-07-21 16:24:45.458853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.511 [2024-07-21 16:24:45.599904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.511 [2024-07-21 16:24:45.691502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.511 [2024-07-21 16:24:45.691561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.511 [2024-07-21 16:24:45.691587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.512 [2024-07-21 16:24:45.691595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.512 [2024-07-21 16:24:45.691601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.512 [2024-07-21 16:24:45.691640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.446 16:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.446 16:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:28.446 16:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.446 16:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.446 16:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.446 16:24:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.446 16:24:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:28.705 [2024-07-21 16:24:46.701709] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.705 ************************************ 00:09:28.705 START TEST lvs_grow_clean 00:09:28.705 ************************************ 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:28.705 16:24:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:28.705 16:24:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.963 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:28.963 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:29.222 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:29.222 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:29.222 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:29.480 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:29.480 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:29.480 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e lvol 150 00:09:29.739 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=68cb35a3-fd77-47f1-bbef-554f1a084981 00:09:29.739 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.739 16:24:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:29.997 [2024-07-21 16:24:47.992978] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:29.997 [2024-07-21 16:24:47.993074] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:29.997 true 00:09:29.997 16:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:29.997 16:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:30.256 16:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:30.256 16:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:30.514 16:24:48 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 68cb35a3-fd77-47f1-bbef-554f1a084981 00:09:30.514 16:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:30.772 [2024-07-21 16:24:48.901503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.772 16:24:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73941 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73941 /var/tmp/bdevperf.sock 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73941 ']' 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.030 16:24:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:31.288 [2024-07-21 16:24:49.265508] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
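The lvs_grow_clean setup above builds its lvstore on a file-backed AIO bdev precisely so the backing store can be enlarged at runtime: the test starts from a 200M file (49 data clusters), exports a 150 MiB lvol, attaches bdevperf to it over TCP, and, further down in this log, grows the lvstore into the 400M file (99 clusters) mid-run. A condensed sketch of that flow, with paths, sizes and flags taken from the trace; the short polling loop on the bdevperf socket is only an approximation of the suite's waitforlisten.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

# 200M backing file -> AIO bdev with 4 KiB blocks -> lvstore with 4 MiB clusters.
rm -f "$aio_file"
truncate -s 200M "$aio_file"
$rpc_py bdev_aio_create "$aio_file" aio_bdev 4096
lvs=$($rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 at 200M

# 150 MiB lvol, then enlarge the backing file and rescan so the AIO bdev sees the new size.
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$aio_file"
$rpc_py bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP, same subsystem layout as the earlier tests.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf on its own RPC socket attaches the subsystem as Nvme0.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
until $rpc_py -s /var/tmp/bdevperf.sock rpc_get_methods &> /dev/null; do sleep 0.5; done
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# Kick off the 10-second randwrite run, then grow the lvstore into the new space mid-run.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
sleep 2
$rpc_py bdev_lvol_grow_lvstore -u "$lvs"
$rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 once grown
wait "$run_test_pid"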
00:09:31.288 [2024-07-21 16:24:49.265633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73941 ] 00:09:31.288 [2024-07-21 16:24:49.396356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.288 [2024-07-21 16:24:49.482411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.221 16:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.221 16:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:32.221 16:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:32.479 Nvme0n1 00:09:32.479 16:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:32.736 [ 00:09:32.736 { 00:09:32.736 "aliases": [ 00:09:32.736 "68cb35a3-fd77-47f1-bbef-554f1a084981" 00:09:32.736 ], 00:09:32.736 "assigned_rate_limits": { 00:09:32.736 "r_mbytes_per_sec": 0, 00:09:32.736 "rw_ios_per_sec": 0, 00:09:32.736 "rw_mbytes_per_sec": 0, 00:09:32.736 "w_mbytes_per_sec": 0 00:09:32.736 }, 00:09:32.736 "block_size": 4096, 00:09:32.736 "claimed": false, 00:09:32.736 "driver_specific": { 00:09:32.736 "mp_policy": "active_passive", 00:09:32.736 "nvme": [ 00:09:32.736 { 00:09:32.736 "ctrlr_data": { 00:09:32.736 "ana_reporting": false, 00:09:32.736 "cntlid": 1, 00:09:32.736 "firmware_revision": "24.09", 00:09:32.736 "model_number": "SPDK bdev Controller", 00:09:32.736 "multi_ctrlr": true, 00:09:32.736 "oacs": { 00:09:32.736 "firmware": 0, 00:09:32.736 "format": 0, 00:09:32.736 "ns_manage": 0, 00:09:32.736 "security": 0 00:09:32.736 }, 00:09:32.736 "serial_number": "SPDK0", 00:09:32.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:32.736 "vendor_id": "0x8086" 00:09:32.736 }, 00:09:32.736 "ns_data": { 00:09:32.736 "can_share": true, 00:09:32.736 "id": 1 00:09:32.736 }, 00:09:32.736 "trid": { 00:09:32.736 "adrfam": "IPv4", 00:09:32.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:32.736 "traddr": "10.0.0.2", 00:09:32.736 "trsvcid": "4420", 00:09:32.736 "trtype": "TCP" 00:09:32.736 }, 00:09:32.736 "vs": { 00:09:32.736 "nvme_version": "1.3" 00:09:32.736 } 00:09:32.736 } 00:09:32.736 ] 00:09:32.736 }, 00:09:32.736 "memory_domains": [ 00:09:32.736 { 00:09:32.736 "dma_device_id": "system", 00:09:32.736 "dma_device_type": 1 00:09:32.736 } 00:09:32.736 ], 00:09:32.737 "name": "Nvme0n1", 00:09:32.737 "num_blocks": 38912, 00:09:32.737 "product_name": "NVMe disk", 00:09:32.737 "supported_io_types": { 00:09:32.737 "abort": true, 00:09:32.737 "compare": true, 00:09:32.737 "compare_and_write": true, 00:09:32.737 "copy": true, 00:09:32.737 "flush": true, 00:09:32.737 "get_zone_info": false, 00:09:32.737 "nvme_admin": true, 00:09:32.737 "nvme_io": true, 00:09:32.737 "nvme_io_md": false, 00:09:32.737 "nvme_iov_md": false, 00:09:32.737 "read": true, 00:09:32.737 "reset": true, 00:09:32.737 "seek_data": false, 00:09:32.737 "seek_hole": false, 00:09:32.737 "unmap": true, 00:09:32.737 "write": true, 00:09:32.737 "write_zeroes": true, 00:09:32.737 "zcopy": false, 00:09:32.737 
"zone_append": false, 00:09:32.737 "zone_management": false 00:09:32.737 }, 00:09:32.737 "uuid": "68cb35a3-fd77-47f1-bbef-554f1a084981", 00:09:32.737 "zoned": false 00:09:32.737 } 00:09:32.737 ] 00:09:32.737 16:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73994 00:09:32.737 16:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:32.737 16:24:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:32.994 Running I/O for 10 seconds... 00:09:33.925 Latency(us) 00:09:33.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.925 Nvme0n1 : 1.00 7108.00 27.77 0.00 0.00 0.00 0.00 0.00 00:09:33.925 =================================================================================================================== 00:09:33.925 Total : 7108.00 27.77 0.00 0.00 0.00 0.00 0.00 00:09:33.925 00:09:34.857 16:24:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:34.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.857 Nvme0n1 : 2.00 7208.00 28.16 0.00 0.00 0.00 0.00 0.00 00:09:34.857 =================================================================================================================== 00:09:34.857 Total : 7208.00 28.16 0.00 0.00 0.00 0.00 0.00 00:09:34.857 00:09:35.114 true 00:09:35.114 16:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:35.114 16:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:35.370 16:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:35.370 16:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:35.370 16:24:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73994 00:09:35.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.942 Nvme0n1 : 3.00 7174.67 28.03 0.00 0.00 0.00 0.00 0.00 00:09:35.942 =================================================================================================================== 00:09:35.942 Total : 7174.67 28.03 0.00 0.00 0.00 0.00 0.00 00:09:35.942 00:09:36.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.934 Nvme0n1 : 4.00 7185.00 28.07 0.00 0.00 0.00 0.00 0.00 00:09:36.934 =================================================================================================================== 00:09:36.934 Total : 7185.00 28.07 0.00 0.00 0.00 0.00 0.00 00:09:36.934 00:09:37.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.867 Nvme0n1 : 5.00 7189.80 28.09 0.00 0.00 0.00 0.00 0.00 00:09:37.867 =================================================================================================================== 00:09:37.867 Total : 7189.80 28.09 0.00 0.00 0.00 0.00 0.00 00:09:37.867 00:09:38.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.800 
Nvme0n1 : 6.00 7198.50 28.12 0.00 0.00 0.00 0.00 0.00 00:09:38.800 =================================================================================================================== 00:09:38.800 Total : 7198.50 28.12 0.00 0.00 0.00 0.00 0.00 00:09:38.800 00:09:40.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.174 Nvme0n1 : 7.00 7189.71 28.08 0.00 0.00 0.00 0.00 0.00 00:09:40.174 =================================================================================================================== 00:09:40.174 Total : 7189.71 28.08 0.00 0.00 0.00 0.00 0.00 00:09:40.174 00:09:40.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.751 Nvme0n1 : 8.00 7188.88 28.08 0.00 0.00 0.00 0.00 0.00 00:09:40.751 =================================================================================================================== 00:09:40.751 Total : 7188.88 28.08 0.00 0.00 0.00 0.00 0.00 00:09:40.751 00:09:42.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.126 Nvme0n1 : 9.00 7172.56 28.02 0.00 0.00 0.00 0.00 0.00 00:09:42.126 =================================================================================================================== 00:09:42.126 Total : 7172.56 28.02 0.00 0.00 0.00 0.00 0.00 00:09:42.126 00:09:43.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.056 Nvme0n1 : 10.00 7172.70 28.02 0.00 0.00 0.00 0.00 0.00 00:09:43.056 =================================================================================================================== 00:09:43.056 Total : 7172.70 28.02 0.00 0.00 0.00 0.00 0.00 00:09:43.056 00:09:43.056 00:09:43.056 Latency(us) 00:09:43.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.056 Nvme0n1 : 10.02 7174.26 28.02 0.00 0.00 17835.66 8400.52 43134.60 00:09:43.056 =================================================================================================================== 00:09:43.056 Total : 7174.26 28.02 0.00 0.00 17835.66 8400.52 43134.60 00:09:43.056 0 00:09:43.056 16:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73941 00:09:43.056 16:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73941 ']' 00:09:43.057 16:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73941 00:09:43.057 16:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:43.057 16:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.057 16:25:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73941 00:09:43.057 16:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:43.057 16:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:43.057 killing process with pid 73941 00:09:43.057 16:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73941' 00:09:43.057 Received shutdown signal, test time was about 10.000000 seconds 00:09:43.057 00:09:43.057 Latency(us) 00:09:43.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.057 
=================================================================================================================== 00:09:43.057 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:43.057 16:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73941 00:09:43.057 16:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73941 00:09:43.314 16:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.571 16:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:43.828 16:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:43.828 16:25:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:44.084 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:44.084 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:44.084 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:44.084 [2024-07-21 16:25:02.277842] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:44.342 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:44.342 2024/07/21 16:25:02 error on 
JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:7813f2f1-3a50-491c-ac5e-239feb0fd24e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:44.342 request: 00:09:44.342 { 00:09:44.342 "method": "bdev_lvol_get_lvstores", 00:09:44.342 "params": { 00:09:44.342 "uuid": "7813f2f1-3a50-491c-ac5e-239feb0fd24e" 00:09:44.342 } 00:09:44.342 } 00:09:44.342 Got JSON-RPC error response 00:09:44.342 GoRPCClient: error on JSON-RPC call 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.600 aio_bdev 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 68cb35a3-fd77-47f1-bbef-554f1a084981 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=68cb35a3-fd77-47f1-bbef-554f1a084981 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:44.600 16:25:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.858 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 68cb35a3-fd77-47f1-bbef-554f1a084981 -t 2000 00:09:45.116 [ 00:09:45.116 { 00:09:45.116 "aliases": [ 00:09:45.116 "lvs/lvol" 00:09:45.116 ], 00:09:45.116 "assigned_rate_limits": { 00:09:45.116 "r_mbytes_per_sec": 0, 00:09:45.116 "rw_ios_per_sec": 0, 00:09:45.116 "rw_mbytes_per_sec": 0, 00:09:45.116 "w_mbytes_per_sec": 0 00:09:45.116 }, 00:09:45.116 "block_size": 4096, 00:09:45.116 "claimed": false, 00:09:45.116 "driver_specific": { 00:09:45.116 "lvol": { 00:09:45.116 "base_bdev": "aio_bdev", 00:09:45.116 "clone": false, 00:09:45.116 "esnap_clone": false, 00:09:45.116 "lvol_store_uuid": "7813f2f1-3a50-491c-ac5e-239feb0fd24e", 00:09:45.116 "num_allocated_clusters": 38, 00:09:45.116 "snapshot": false, 00:09:45.116 "thin_provision": false 00:09:45.116 } 00:09:45.116 }, 00:09:45.116 "name": "68cb35a3-fd77-47f1-bbef-554f1a084981", 00:09:45.116 "num_blocks": 38912, 00:09:45.116 "product_name": "Logical Volume", 00:09:45.116 "supported_io_types": { 00:09:45.116 "abort": false, 00:09:45.116 "compare": false, 00:09:45.116 "compare_and_write": false, 00:09:45.116 "copy": false, 00:09:45.116 "flush": false, 00:09:45.116 "get_zone_info": false, 00:09:45.116 "nvme_admin": false, 00:09:45.116 "nvme_io": false, 00:09:45.116 "nvme_io_md": false, 00:09:45.116 "nvme_iov_md": false, 00:09:45.116 "read": true, 
00:09:45.116 "reset": true, 00:09:45.116 "seek_data": true, 00:09:45.116 "seek_hole": true, 00:09:45.116 "unmap": true, 00:09:45.116 "write": true, 00:09:45.116 "write_zeroes": true, 00:09:45.116 "zcopy": false, 00:09:45.116 "zone_append": false, 00:09:45.116 "zone_management": false 00:09:45.116 }, 00:09:45.116 "uuid": "68cb35a3-fd77-47f1-bbef-554f1a084981", 00:09:45.116 "zoned": false 00:09:45.116 } 00:09:45.116 ] 00:09:45.116 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:45.116 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:45.116 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:45.375 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:45.375 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:45.375 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:45.633 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:45.633 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 68cb35a3-fd77-47f1-bbef-554f1a084981 00:09:45.892 16:25:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7813f2f1-3a50-491c-ac5e-239feb0fd24e 00:09:46.150 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:46.408 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.666 ************************************ 00:09:46.666 END TEST lvs_grow_clean 00:09:46.666 ************************************ 00:09:46.666 00:09:46.666 real 0m18.071s 00:09:46.666 user 0m17.499s 00:09:46.666 sys 0m2.135s 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:46.666 ************************************ 00:09:46.666 START TEST lvs_grow_dirty 00:09:46.666 ************************************ 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 
-- # local data_clusters free_clusters 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.666 16:25:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:47.233 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:47.233 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:47.491 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=70e76572-cb89-4267-9357-d3c88745bdcd 00:09:47.491 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:09:47.491 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:47.749 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:47.749 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:47.749 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 70e76572-cb89-4267-9357-d3c88745bdcd lvol 150 00:09:48.006 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=839fe885-4a92-43ab-807d-27c5bc3bf043 00:09:48.006 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:48.006 16:25:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:48.273 [2024-07-21 16:25:06.236027] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:48.273 [2024-07-21 16:25:06.236118] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:48.273 true 00:09:48.273 16:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:48.273 16:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:09:48.546 16:25:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:48.546 16:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:48.546 16:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 839fe885-4a92-43ab-807d-27c5bc3bf043 00:09:48.804 16:25:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:49.062 [2024-07-21 16:25:07.136570] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.062 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74385 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74385 /var/tmp/bdevperf.sock 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74385 ']' 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.320 16:25:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.320 [2024-07-21 16:25:07.443808] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
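The grow step that both variants exercise is visible in the clean run above: the backing file is extended with truncate, bdev_aio_rescan picks up the new size (old block count 51200, new block count 102400), and only an explicit bdev_lvol_grow_lvstore moves the store from 49 to 99 data clusters. A condensed sketch of that step, reusing the shorthand from the previous sketch and the lvstore UUID printed in the clean run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs=7813f2f1-3a50-491c-ac5e-239feb0fd24e   # UUID returned by bdev_lvol_create_lvstore in the clean run
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $rpc bdev_aio_rescan aio_bdev              # resizes the AIO bdev in place; the lvstore is untouched
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after the grow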
00:09:49.320 [2024-07-21 16:25:07.444477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74385 ] 00:09:49.577 [2024-07-21 16:25:07.582617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.577 [2024-07-21 16:25:07.681697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.508 16:25:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.508 16:25:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:50.508 16:25:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:50.508 Nvme0n1 00:09:50.508 16:25:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:50.764 [ 00:09:50.764 { 00:09:50.764 "aliases": [ 00:09:50.764 "839fe885-4a92-43ab-807d-27c5bc3bf043" 00:09:50.764 ], 00:09:50.764 "assigned_rate_limits": { 00:09:50.764 "r_mbytes_per_sec": 0, 00:09:50.764 "rw_ios_per_sec": 0, 00:09:50.764 "rw_mbytes_per_sec": 0, 00:09:50.764 "w_mbytes_per_sec": 0 00:09:50.764 }, 00:09:50.764 "block_size": 4096, 00:09:50.764 "claimed": false, 00:09:50.764 "driver_specific": { 00:09:50.764 "mp_policy": "active_passive", 00:09:50.764 "nvme": [ 00:09:50.764 { 00:09:50.764 "ctrlr_data": { 00:09:50.764 "ana_reporting": false, 00:09:50.764 "cntlid": 1, 00:09:50.764 "firmware_revision": "24.09", 00:09:50.764 "model_number": "SPDK bdev Controller", 00:09:50.764 "multi_ctrlr": true, 00:09:50.764 "oacs": { 00:09:50.764 "firmware": 0, 00:09:50.764 "format": 0, 00:09:50.764 "ns_manage": 0, 00:09:50.764 "security": 0 00:09:50.764 }, 00:09:50.764 "serial_number": "SPDK0", 00:09:50.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:50.764 "vendor_id": "0x8086" 00:09:50.764 }, 00:09:50.764 "ns_data": { 00:09:50.764 "can_share": true, 00:09:50.764 "id": 1 00:09:50.764 }, 00:09:50.764 "trid": { 00:09:50.764 "adrfam": "IPv4", 00:09:50.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:50.764 "traddr": "10.0.0.2", 00:09:50.764 "trsvcid": "4420", 00:09:50.764 "trtype": "TCP" 00:09:50.764 }, 00:09:50.764 "vs": { 00:09:50.764 "nvme_version": "1.3" 00:09:50.764 } 00:09:50.764 } 00:09:50.764 ] 00:09:50.764 }, 00:09:50.764 "memory_domains": [ 00:09:50.764 { 00:09:50.764 "dma_device_id": "system", 00:09:50.764 "dma_device_type": 1 00:09:50.764 } 00:09:50.764 ], 00:09:50.764 "name": "Nvme0n1", 00:09:50.764 "num_blocks": 38912, 00:09:50.764 "product_name": "NVMe disk", 00:09:50.764 "supported_io_types": { 00:09:50.764 "abort": true, 00:09:50.764 "compare": true, 00:09:50.764 "compare_and_write": true, 00:09:50.764 "copy": true, 00:09:50.764 "flush": true, 00:09:50.764 "get_zone_info": false, 00:09:50.764 "nvme_admin": true, 00:09:50.764 "nvme_io": true, 00:09:50.764 "nvme_io_md": false, 00:09:50.764 "nvme_iov_md": false, 00:09:50.764 "read": true, 00:09:50.764 "reset": true, 00:09:50.764 "seek_data": false, 00:09:50.764 "seek_hole": false, 00:09:50.764 "unmap": true, 00:09:50.764 "write": true, 00:09:50.765 "write_zeroes": true, 00:09:50.765 "zcopy": false, 00:09:50.765 
"zone_append": false, 00:09:50.765 "zone_management": false 00:09:50.765 }, 00:09:50.765 "uuid": "839fe885-4a92-43ab-807d-27c5bc3bf043", 00:09:50.765 "zoned": false 00:09:50.765 } 00:09:50.765 ] 00:09:50.765 16:25:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.765 16:25:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74427 00:09:50.765 16:25:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:51.021 Running I/O for 10 seconds... 00:09:51.954 Latency(us) 00:09:51.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.954 Nvme0n1 : 1.00 9321.00 36.41 0.00 0.00 0.00 0.00 0.00 00:09:51.954 =================================================================================================================== 00:09:51.954 Total : 9321.00 36.41 0.00 0.00 0.00 0.00 0.00 00:09:51.954 00:09:52.888 16:25:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:09:52.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.888 Nvme0n1 : 2.00 9862.00 38.52 0.00 0.00 0.00 0.00 0.00 00:09:52.888 =================================================================================================================== 00:09:52.888 Total : 9862.00 38.52 0.00 0.00 0.00 0.00 0.00 00:09:52.888 00:09:53.146 true 00:09:53.146 16:25:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:09:53.146 16:25:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:53.404 16:25:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:53.404 16:25:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:53.404 16:25:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74427 00:09:53.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.972 Nvme0n1 : 3.00 10035.00 39.20 0.00 0.00 0.00 0.00 0.00 00:09:53.972 =================================================================================================================== 00:09:53.972 Total : 10035.00 39.20 0.00 0.00 0.00 0.00 0.00 00:09:53.972 00:09:54.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.908 Nvme0n1 : 4.00 10075.50 39.36 0.00 0.00 0.00 0.00 0.00 00:09:54.908 =================================================================================================================== 00:09:54.908 Total : 10075.50 39.36 0.00 0.00 0.00 0.00 0.00 00:09:54.908 00:09:55.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.840 Nvme0n1 : 5.00 10112.80 39.50 0.00 0.00 0.00 0.00 0.00 00:09:55.840 =================================================================================================================== 00:09:55.840 Total : 10112.80 39.50 0.00 0.00 0.00 0.00 0.00 00:09:55.840 00:09:57.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:57.217 Nvme0n1 : 6.00 10106.17 39.48 0.00 0.00 0.00 0.00 0.00 00:09:57.217 =================================================================================================================== 00:09:57.217 Total : 10106.17 39.48 0.00 0.00 0.00 0.00 0.00 00:09:57.217 00:09:58.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.152 Nvme0n1 : 7.00 9912.86 38.72 0.00 0.00 0.00 0.00 0.00 00:09:58.152 =================================================================================================================== 00:09:58.152 Total : 9912.86 38.72 0.00 0.00 0.00 0.00 0.00 00:09:58.152 00:09:59.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.097 Nvme0n1 : 8.00 9329.75 36.44 0.00 0.00 0.00 0.00 0.00 00:09:59.097 =================================================================================================================== 00:09:59.097 Total : 9329.75 36.44 0.00 0.00 0.00 0.00 0.00 00:09:59.097 00:10:00.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.044 Nvme0n1 : 9.00 9082.78 35.48 0.00 0.00 0.00 0.00 0.00 00:10:00.044 =================================================================================================================== 00:10:00.044 Total : 9082.78 35.48 0.00 0.00 0.00 0.00 0.00 00:10:00.044 00:10:00.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.979 Nvme0n1 : 10.00 8878.90 34.68 0.00 0.00 0.00 0.00 0.00 00:10:00.979 =================================================================================================================== 00:10:00.979 Total : 8878.90 34.68 0.00 0.00 0.00 0.00 0.00 00:10:00.979 00:10:00.979 00:10:00.979 Latency(us) 00:10:00.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.979 Nvme0n1 : 10.01 8881.27 34.69 0.00 0.00 14407.15 5987.61 301227.29 00:10:00.979 =================================================================================================================== 00:10:00.979 Total : 8881.27 34.69 0.00 0.00 14407.15 5987.61 301227.29 00:10:00.979 0 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74385 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74385 ']' 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74385 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74385 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:00.979 killing process with pid 74385 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74385' 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74385 00:10:00.979 Received shutdown signal, test time was about 10.000000 seconds 00:10:00.979 00:10:00.979 
Latency(us) 00:10:00.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.979 =================================================================================================================== 00:10:00.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:00.979 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74385 00:10:01.237 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.494 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:01.752 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:10:01.752 16:25:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73778 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73778 00:10:02.010 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73778 Killed "${NVMF_APP[@]}" "$@" 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74590 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74590 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74590 ']' 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
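Both variants drive I/O from a separate bdevperf process rather than from the target itself: bdevperf starts in wait-for-RPC mode on its own socket, the exported namespace is attached to it over NVMe/TCP, and bdevperf.py triggers the run; the dirty variant then kills the target outright (the kill -9 of pid 73778 above) so the lvstore is left dirty and must be recovered when it is next loaded. A sketch of the bdevperf side, assuming the target is already listening on 10.0.0.2:4420 as in this run (the socket-wait loop is a stand-in for the harness's waitforlisten helper):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # same flags as the run above: 4K randwrite, queue depth 128, 10 seconds, wait for RPC (-z)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
  # attach the exported namespace over NVMe/TCP, then kick off the workload
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests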
00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.010 16:25:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:02.010 [2024-07-21 16:25:20.149578] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:02.010 [2024-07-21 16:25:20.149659] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.268 [2024-07-21 16:25:20.279902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.268 [2024-07-21 16:25:20.390819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.268 [2024-07-21 16:25:20.390881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.268 [2024-07-21 16:25:20.390907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.268 [2024-07-21 16:25:20.390914] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.268 [2024-07-21 16:25:20.390921] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.268 [2024-07-21 16:25:20.390942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:03.199 [2024-07-21 16:25:21.269721] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:03.199 [2024-07-21 16:25:21.270049] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:03.199 [2024-07-21 16:25:21.270201] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 839fe885-4a92-43ab-807d-27c5bc3bf043 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=839fe885-4a92-43ab-807d-27c5bc3bf043 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:03.199 16:25:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:03.199 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:03.457 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 839fe885-4a92-43ab-807d-27c5bc3bf043 -t 2000 00:10:03.714 [ 00:10:03.714 { 00:10:03.714 "aliases": [ 00:10:03.714 "lvs/lvol" 00:10:03.714 ], 00:10:03.714 "assigned_rate_limits": { 00:10:03.714 "r_mbytes_per_sec": 0, 00:10:03.714 "rw_ios_per_sec": 0, 00:10:03.714 "rw_mbytes_per_sec": 0, 00:10:03.714 "w_mbytes_per_sec": 0 00:10:03.714 }, 00:10:03.714 "block_size": 4096, 00:10:03.714 "claimed": false, 00:10:03.714 "driver_specific": { 00:10:03.714 "lvol": { 00:10:03.714 "base_bdev": "aio_bdev", 00:10:03.714 "clone": false, 00:10:03.714 "esnap_clone": false, 00:10:03.714 "lvol_store_uuid": "70e76572-cb89-4267-9357-d3c88745bdcd", 00:10:03.714 "num_allocated_clusters": 38, 00:10:03.714 "snapshot": false, 00:10:03.714 "thin_provision": false 00:10:03.714 } 00:10:03.714 }, 00:10:03.714 "name": "839fe885-4a92-43ab-807d-27c5bc3bf043", 00:10:03.714 "num_blocks": 38912, 00:10:03.714 "product_name": "Logical Volume", 00:10:03.714 "supported_io_types": { 00:10:03.714 "abort": false, 00:10:03.714 "compare": false, 00:10:03.714 "compare_and_write": false, 00:10:03.714 "copy": false, 00:10:03.714 "flush": false, 00:10:03.714 "get_zone_info": false, 00:10:03.714 "nvme_admin": false, 00:10:03.714 "nvme_io": false, 00:10:03.714 "nvme_io_md": false, 00:10:03.714 "nvme_iov_md": false, 00:10:03.714 "read": true, 00:10:03.714 "reset": true, 00:10:03.714 "seek_data": true, 00:10:03.714 "seek_hole": true, 00:10:03.714 "unmap": true, 00:10:03.714 "write": true, 00:10:03.714 "write_zeroes": true, 00:10:03.714 "zcopy": false, 00:10:03.714 "zone_append": false, 00:10:03.714 "zone_management": false 00:10:03.714 }, 00:10:03.714 "uuid": "839fe885-4a92-43ab-807d-27c5bc3bf043", 00:10:03.714 "zoned": false 00:10:03.714 } 00:10:03.714 ] 00:10:03.714 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:03.714 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:10:03.714 16:25:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:03.972 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:03.972 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:10:03.972 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:04.230 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:04.230 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:04.489 [2024-07-21 16:25:22.567198] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:04.489 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:10:04.747 2024/07/21 16:25:22 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:70e76572-cb89-4267-9357-d3c88745bdcd], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:04.747 request: 00:10:04.747 { 00:10:04.747 "method": "bdev_lvol_get_lvstores", 00:10:04.747 "params": { 00:10:04.747 "uuid": "70e76572-cb89-4267-9357-d3c88745bdcd" 00:10:04.747 } 00:10:04.747 } 00:10:04.747 Got JSON-RPC error response 00:10:04.747 GoRPCClient: error on JSON-RPC call 00:10:04.747 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:04.747 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:04.747 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:04.747 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:04.747 16:25:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:05.005 aio_bdev 00:10:05.005 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 839fe885-4a92-43ab-807d-27c5bc3bf043 00:10:05.005 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=839fe885-4a92-43ab-807d-27c5bc3bf043 00:10:05.005 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:05.005 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:05.005 16:25:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:05.005 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:05.005 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:05.263 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 839fe885-4a92-43ab-807d-27c5bc3bf043 -t 2000 00:10:05.522 [ 00:10:05.522 { 00:10:05.522 "aliases": [ 00:10:05.522 "lvs/lvol" 00:10:05.522 ], 00:10:05.522 "assigned_rate_limits": { 00:10:05.522 "r_mbytes_per_sec": 0, 00:10:05.522 "rw_ios_per_sec": 0, 00:10:05.522 "rw_mbytes_per_sec": 0, 00:10:05.522 "w_mbytes_per_sec": 0 00:10:05.522 }, 00:10:05.522 "block_size": 4096, 00:10:05.522 "claimed": false, 00:10:05.522 "driver_specific": { 00:10:05.522 "lvol": { 00:10:05.522 "base_bdev": "aio_bdev", 00:10:05.522 "clone": false, 00:10:05.522 "esnap_clone": false, 00:10:05.522 "lvol_store_uuid": "70e76572-cb89-4267-9357-d3c88745bdcd", 00:10:05.522 "num_allocated_clusters": 38, 00:10:05.522 "snapshot": false, 00:10:05.522 "thin_provision": false 00:10:05.522 } 00:10:05.522 }, 00:10:05.522 "name": "839fe885-4a92-43ab-807d-27c5bc3bf043", 00:10:05.522 "num_blocks": 38912, 00:10:05.522 "product_name": "Logical Volume", 00:10:05.522 "supported_io_types": { 00:10:05.522 "abort": false, 00:10:05.522 "compare": false, 00:10:05.522 "compare_and_write": false, 00:10:05.522 "copy": false, 00:10:05.522 "flush": false, 00:10:05.522 "get_zone_info": false, 00:10:05.522 "nvme_admin": false, 00:10:05.522 "nvme_io": false, 00:10:05.522 "nvme_io_md": false, 00:10:05.522 "nvme_iov_md": false, 00:10:05.522 "read": true, 00:10:05.522 "reset": true, 00:10:05.522 "seek_data": true, 00:10:05.522 "seek_hole": true, 00:10:05.522 "unmap": true, 00:10:05.522 "write": true, 00:10:05.522 "write_zeroes": true, 00:10:05.522 "zcopy": false, 00:10:05.522 "zone_append": false, 00:10:05.522 "zone_management": false 00:10:05.522 }, 00:10:05.522 "uuid": "839fe885-4a92-43ab-807d-27c5bc3bf043", 00:10:05.522 "zoned": false 00:10:05.522 } 00:10:05.522 ] 00:10:05.522 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:05.522 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:10:05.522 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:05.779 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:05.779 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:05.779 16:25:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:10:06.037 16:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:06.037 16:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 839fe885-4a92-43ab-807d-27c5bc3bf043 00:10:06.294 16:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 70e76572-cb89-4267-9357-d3c88745bdcd 00:10:06.552 16:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:06.809 16:25:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:07.066 00:10:07.066 real 0m20.392s 00:10:07.066 user 0m40.715s 00:10:07.066 sys 0m9.551s 00:10:07.066 16:25:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.066 ************************************ 00:10:07.066 END TEST lvs_grow_dirty 00:10:07.066 16:25:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:07.066 ************************************ 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:07.323 nvmf_trace.0 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.323 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:07.581 rmmod nvme_tcp 00:10:07.581 rmmod nvme_fabrics 00:10:07.581 rmmod nvme_keyring 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74590 ']' 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74590 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74590 ']' 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74590 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:07.581 16:25:25 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74590 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.581 killing process with pid 74590 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74590' 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74590 00:10:07.581 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74590 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:07.859 00:10:07.859 real 0m40.958s 00:10:07.859 user 1m4.528s 00:10:07.859 sys 0m12.449s 00:10:07.859 ************************************ 00:10:07.859 END TEST nvmf_lvs_grow 00:10:07.859 ************************************ 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.859 16:25:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:07.859 16:25:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:07.859 16:25:25 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:07.859 16:25:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:07.859 16:25:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.859 16:25:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.859 ************************************ 00:10:07.859 START TEST nvmf_bdev_io_wait 00:10:07.859 ************************************ 00:10:07.859 16:25:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:07.859 * Looking for test storage... 
00:10:07.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:07.859 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:08.117 Cannot find device "nvmf_tgt_br" 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.117 Cannot find device "nvmf_tgt_br2" 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:08.117 Cannot find device "nvmf_tgt_br" 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:08.117 Cannot find device "nvmf_tgt_br2" 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.117 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:08.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:08.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:10:08.374 00:10:08.374 --- 10.0.0.2 ping statistics --- 00:10:08.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.374 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:08.374 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:08.374 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:08.374 00:10:08.374 --- 10.0.0.3 ping statistics --- 00:10:08.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.374 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:08.374 00:10:08.374 --- 10.0.0.1 ping statistics --- 00:10:08.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.374 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=75002 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 75002 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 75002 ']' 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
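The nvmf_veth_init trace above builds the virtual topology the rest of this run depends on: an initiator-side veth pair kept in the root namespace (nvmf_init_if at 10.0.0.1), two target-side pairs whose inner ends (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and a bridge nvmf_br that ties the root-namespace ends together. The earlier "Cannot find device" / "Cannot open network namespace" messages are the helper's best-effort teardown of a previous run and are tolerated (note the "# true" guards in the trace). A condensed sketch of the same commands, copied from the trace; the real helper in test/nvmf/common.sh adds variable indirection and error handling not shown here:

  # run as root; interface names and addresses as seen in the trace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target-side ends into the namespace and assign addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the root-namespace ends together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP (port 4420) from the initiator side, allow bridge forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity pings in both directions, mirroring the trace
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, nvmfappstart launches nvmf_tgt inside it (ip netns exec nvmf_tgt_ns_spdk ... -m 0xF --wait-for-rpc here), so the target listens on 10.0.0.2 while the initiator-side tools connect from the root namespace across the bridge.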
00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.374 16:25:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.374 [2024-07-21 16:25:26.476659] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:08.374 [2024-07-21 16:25:26.476737] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.631 [2024-07-21 16:25:26.611440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.631 [2024-07-21 16:25:26.710491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.631 [2024-07-21 16:25:26.710555] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.631 [2024-07-21 16:25:26.710581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.631 [2024-07-21 16:25:26.710589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.631 [2024-07-21 16:25:26.710612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.632 [2024-07-21 16:25:26.710759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.632 [2024-07-21 16:25:26.711069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.632 [2024-07-21 16:25:26.711556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.632 [2024-07-21 16:25:26.711567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.574 16:25:27 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.574 [2024-07-21 16:25:27.616035] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.574 Malloc0 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:09.574 [2024-07-21 16:25:27.690488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=75061 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=75063 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:09.574 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:09.574 { 00:10:09.574 "params": { 00:10:09.574 "name": "Nvme$subsystem", 00:10:09.574 "trtype": "$TEST_TRANSPORT", 00:10:09.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.574 "adrfam": "ipv4", 00:10:09.574 "trsvcid": "$NVMF_PORT", 00:10:09.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.575 "hdgst": ${hdgst:-false}, 00:10:09.575 "ddgst": 
${ddgst:-false} 00:10:09.575 }, 00:10:09.575 "method": "bdev_nvme_attach_controller" 00:10:09.575 } 00:10:09.575 EOF 00:10:09.575 )") 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=75065 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:09.575 { 00:10:09.575 "params": { 00:10:09.575 "name": "Nvme$subsystem", 00:10:09.575 "trtype": "$TEST_TRANSPORT", 00:10:09.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.575 "adrfam": "ipv4", 00:10:09.575 "trsvcid": "$NVMF_PORT", 00:10:09.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.575 "hdgst": ${hdgst:-false}, 00:10:09.575 "ddgst": ${ddgst:-false} 00:10:09.575 }, 00:10:09.575 "method": "bdev_nvme_attach_controller" 00:10:09.575 } 00:10:09.575 EOF 00:10:09.575 )") 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=75067 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:09.575 { 00:10:09.575 "params": { 00:10:09.575 "name": "Nvme$subsystem", 00:10:09.575 "trtype": "$TEST_TRANSPORT", 00:10:09.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.575 "adrfam": "ipv4", 00:10:09.575 "trsvcid": "$NVMF_PORT", 00:10:09.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.575 "hdgst": ${hdgst:-false}, 00:10:09.575 "ddgst": ${ddgst:-false} 00:10:09.575 }, 00:10:09.575 "method": "bdev_nvme_attach_controller" 00:10:09.575 } 00:10:09.575 EOF 00:10:09.575 )") 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # 
local subsystem config 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:09.575 { 00:10:09.575 "params": { 00:10:09.575 "name": "Nvme$subsystem", 00:10:09.575 "trtype": "$TEST_TRANSPORT", 00:10:09.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.575 "adrfam": "ipv4", 00:10:09.575 "trsvcid": "$NVMF_PORT", 00:10:09.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.575 "hdgst": ${hdgst:-false}, 00:10:09.575 "ddgst": ${ddgst:-false} 00:10:09.575 }, 00:10:09.575 "method": "bdev_nvme_attach_controller" 00:10:09.575 } 00:10:09.575 EOF 00:10:09.575 )") 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:09.575 "params": { 00:10:09.575 "name": "Nvme1", 00:10:09.575 "trtype": "tcp", 00:10:09.575 "traddr": "10.0.0.2", 00:10:09.575 "adrfam": "ipv4", 00:10:09.575 "trsvcid": "4420", 00:10:09.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.575 "hdgst": false, 00:10:09.575 "ddgst": false 00:10:09.575 }, 00:10:09.575 "method": "bdev_nvme_attach_controller" 00:10:09.575 }' 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:09.575 "params": { 00:10:09.575 "name": "Nvme1", 00:10:09.575 "trtype": "tcp", 00:10:09.575 "traddr": "10.0.0.2", 00:10:09.575 "adrfam": "ipv4", 00:10:09.575 "trsvcid": "4420", 00:10:09.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.575 "hdgst": false, 00:10:09.575 "ddgst": false 00:10:09.575 }, 00:10:09.575 "method": "bdev_nvme_attach_controller" 00:10:09.575 }' 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:09.575 "params": { 00:10:09.575 "name": "Nvme1", 00:10:09.575 "trtype": "tcp", 00:10:09.575 "traddr": "10.0.0.2", 00:10:09.575 "adrfam": "ipv4", 00:10:09.575 "trsvcid": "4420", 00:10:09.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.575 "hdgst": false, 00:10:09.575 "ddgst": false 00:10:09.575 }, 00:10:09.575 "method": "bdev_nvme_attach_controller" 00:10:09.575 }' 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:09.575 "params": { 00:10:09.575 "name": "Nvme1", 00:10:09.575 "trtype": "tcp", 00:10:09.575 "traddr": "10.0.0.2", 00:10:09.575 "adrfam": "ipv4", 00:10:09.575 "trsvcid": "4420", 00:10:09.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.575 "hdgst": false, 00:10:09.575 "ddgst": false 00:10:09.575 }, 00:10:09.575 "method": "bdev_nvme_attach_controller" 00:10:09.575 }' 00:10:09.575 [2024-07-21 16:25:27.759396] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:09.575 [2024-07-21 16:25:27.759489] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:09.575 16:25:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 75061 00:10:09.575 [2024-07-21 16:25:27.777044] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:09.575 [2024-07-21 16:25:27.777518] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:09.834 [2024-07-21 16:25:27.781836] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:09.834 [2024-07-21 16:25:27.782090] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:09.834 [2024-07-21 16:25:27.788301] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:09.834 [2024-07-21 16:25:27.788379] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:09.834 [2024-07-21 16:25:27.971563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.092 [2024-07-21 16:25:28.047469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.092 [2024-07-21 16:25:28.079737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.092 [2024-07-21 16:25:28.156386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.092 [2024-07-21 16:25:28.156778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:10.092 [2024-07-21 16:25:28.230253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.092 Running I/O for 1 seconds... 00:10:10.092 [2024-07-21 16:25:28.260445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:10.350 Running I/O for 1 seconds... 00:10:10.350 [2024-07-21 16:25:28.338554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:10.350 Running I/O for 1 seconds... 00:10:10.350 Running I/O for 1 seconds... 
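At this point four bdevperf instances are running in parallel, one per workload (write, read, flush, unmap), each pinned to its own core mask (0x10/0x20/0x40/0x80) with a 256 MB memory size (-s 256). Each reads its bdev configuration from /dev/fd/63, i.e. the output of gen_nvmf_target_json fed in via process substitution; the trace shows the helper assembling a bdev_nvme_attach_controller stanza per target from a heredoc and normalizing it with jq. A sketch of the invocations (command lines copied from the trace) together with the resolved stanza the helper prints; the full wrapper structure bdevperf actually receives comes from the helper and is not reproduced here:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  # one short (-t 1) run per workload: 128 outstanding 4096-byte I/Os each
  "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  "$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
  "$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  "$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  wait   # the script waits on the individual PIDs (75061/75063/75065/75067)
  # stanza emitted for Nvme1, as printed after `jq .`:
  # { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
  #               "adrfam": "ipv4", "trsvcid": "4420",
  #               "subnqn": "nqn.2016-06.io.spdk:cnode1",
  #               "hostnqn": "nqn.2016-06.io.spdk:host1",
  #               "hdgst": false, "ddgst": false },
  #   "method": "bdev_nvme_attach_controller" }

In the results that follow, the flush workload reports far higher IOPS and far lower latency than the data-moving workloads, likely because a flush against the RAM-backed malloc bdev has no media to sync.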
00:10:11.285 00:10:11.285 Latency(us) 00:10:11.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.285 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:11.285 Nvme1n1 : 1.02 5248.27 20.50 0.00 0.00 24080.16 7626.01 42419.67 00:10:11.285 =================================================================================================================== 00:10:11.285 Total : 5248.27 20.50 0.00 0.00 24080.16 7626.01 42419.67 00:10:11.285 00:10:11.285 Latency(us) 00:10:11.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.285 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:11.285 Nvme1n1 : 1.01 5800.37 22.66 0.00 0.00 21902.23 11796.48 35031.97 00:10:11.285 =================================================================================================================== 00:10:11.285 Total : 5800.37 22.66 0.00 0.00 21902.23 11796.48 35031.97 00:10:11.285 00:10:11.285 Latency(us) 00:10:11.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.285 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:11.285 Nvme1n1 : 1.00 190236.49 743.11 0.00 0.00 670.02 364.92 1735.21 00:10:11.285 =================================================================================================================== 00:10:11.285 Total : 190236.49 743.11 0.00 0.00 670.02 364.92 1735.21 00:10:11.543 00:10:11.543 Latency(us) 00:10:11.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.543 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:11.543 Nvme1n1 : 1.01 6090.07 23.79 0.00 0.00 20953.06 5630.14 61008.06 00:10:11.543 =================================================================================================================== 00:10:11.543 Total : 6090.07 23.79 0.00 0.00 20953.06 5630.14 61008.06 00:10:11.543 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 75063 00:10:11.543 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 75065 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 75067 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.802 rmmod nvme_tcp 00:10:11.802 rmmod nvme_fabrics 00:10:11.802 rmmod nvme_keyring 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 75002 ']' 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 75002 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 75002 ']' 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 75002 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75002 00:10:11.802 killing process with pid 75002 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75002' 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 75002 00:10:11.802 16:25:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 75002 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:12.062 00:10:12.062 real 0m4.286s 00:10:12.062 user 0m19.512s 00:10:12.062 sys 0m1.884s 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.062 16:25:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:12.062 ************************************ 00:10:12.062 END TEST nvmf_bdev_io_wait 00:10:12.062 ************************************ 00:10:12.321 16:25:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:12.321 16:25:30 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:12.321 16:25:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:12.321 16:25:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.321 16:25:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.321 ************************************ 00:10:12.321 START TEST nvmf_queue_depth 00:10:12.321 ************************************ 00:10:12.321 16:25:30 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:12.321 * Looking for test storage... 00:10:12.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.321 16:25:30 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.322 Cannot find device "nvmf_tgt_br" 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.322 Cannot find device "nvmf_tgt_br2" 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:12.322 Cannot find device "nvmf_tgt_br" 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.322 Cannot find device "nvmf_tgt_br2" 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:12.322 16:25:30 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.322 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:12.322 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:10:12.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:10:12.581 00:10:12.581 --- 10.0.0.2 ping statistics --- 00:10:12.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.581 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:12.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:12.581 00:10:12.581 --- 10.0.0.3 ping statistics --- 00:10:12.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.581 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:12.581 00:10:12.581 --- 10.0.0.1 ping statistics --- 00:10:12.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.581 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=75299 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 75299 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75299 ']' 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
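The stretch above is nvmf_veth_init from test/nvmf/common.sh: it deletes any stale interfaces (hence the harmless "Cannot open network namespace" messages), then builds the two-path test topology. The initiator keeps nvmf_init_if (10.0.0.1) on the host, the two target-side veths nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are joined by the nvmf_br bridge so one initiator address can reach both target addresses. A minimal standalone sketch of the same setup (run as root; names and addresses copied from the log, everything else illustrative):

  # rebuild the two-path veth topology by hand
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip link set nvmf_init_br master nvmf_br     # host-side veth ends join the bridge
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # both target paths should answer, as in the pings below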
00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.581 16:25:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.839 [2024-07-21 16:25:30.797099] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:12.839 [2024-07-21 16:25:30.797204] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.839 [2024-07-21 16:25:30.938334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.839 [2024-07-21 16:25:31.046500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.839 [2024-07-21 16:25:31.046583] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.839 [2024-07-21 16:25:31.046604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.839 [2024-07-21 16:25:31.046612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.839 [2024-07-21 16:25:31.046618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.839 [2024-07-21 16:25:31.046642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.772 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.772 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:13.772 16:25:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.772 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.772 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.772 16:25:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.772 16:25:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.772 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.773 [2024-07-21 16:25:31.853940] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.773 Malloc0 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
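Once the namespace is up, the target side is plain SPDK JSON-RPC: rpc_cmd in the trace above is the harness's wrapper around scripts/rpc.py talking to the nvmf_tgt that was just launched inside the namespace on the default /var/tmp/spdk.sock socket. A condensed sketch of the same bring-up, assuming that target process is already listening (the real script waits with waitforlisten rather than guessing):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options as in the log
  $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001                                # allow any host, set serial number
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420                              # listener the initiator will dial

The last two RPCs are the ones that appear a few lines further down in the trace.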
00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:13.773 [2024-07-21 16:25:31.918838] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75349 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75349 /var/tmp/bdevperf.sock 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75349 ']' 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.773 16:25:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.030 [2024-07-21 16:25:31.985035] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
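The initiator half of the queue-depth test is driven by bdevperf: it is started idle (-z) with its own RPC socket, an NVMe/TCP controller for the exported subsystem is attached to it over that socket, and perform_tests then runs the configured 10-second verify workload at queue depth 1024 with 4 KiB I/O. A sketch of those three steps, with the paths taken from the log:

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  sleep 2    # stand-in for the script's waitforlisten on /var/tmp/bdevperf.sock
  # expose the remote namespace to bdevperf as bdev NVMe0n1
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
       -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the workload; the results are the Latency/IOPS table printed below
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests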
00:10:14.030 [2024-07-21 16:25:31.985153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75349 ] 00:10:14.030 [2024-07-21 16:25:32.127905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.287 [2024-07-21 16:25:32.251859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.852 16:25:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.852 16:25:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:14.852 16:25:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:14.852 16:25:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.852 16:25:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.852 NVMe0n1 00:10:14.852 16:25:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.852 16:25:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:15.109 Running I/O for 10 seconds... 00:10:25.123 00:10:25.123 Latency(us) 00:10:25.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.123 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:25.123 Verification LBA range: start 0x0 length 0x4000 00:10:25.123 NVMe0n1 : 10.06 10591.00 41.37 0.00 0.00 96341.65 18230.92 96278.34 00:10:25.123 =================================================================================================================== 00:10:25.123 Total : 10591.00 41.37 0.00 0.00 96341.65 18230.92 96278.34 00:10:25.123 0 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75349 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75349 ']' 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75349 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75349 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:25.123 killing process with pid 75349 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75349' 00:10:25.123 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75349 00:10:25.123 Received shutdown signal, test time was about 10.000000 seconds 00:10:25.123 00:10:25.123 Latency(us) 00:10:25.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.123 =================================================================================================================== 00:10:25.123 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:25.123 16:25:43 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75349 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:25.381 rmmod nvme_tcp 00:10:25.381 rmmod nvme_fabrics 00:10:25.381 rmmod nvme_keyring 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 75299 ']' 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 75299 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75299 ']' 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75299 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:25.381 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.639 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75299 00:10:25.639 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:25.639 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:25.639 killing process with pid 75299 00:10:25.639 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75299' 00:10:25.639 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75299 00:10:25.639 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75299 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:25.897 00:10:25.897 real 0m13.664s 00:10:25.897 user 0m23.100s 00:10:25.897 sys 0m2.427s 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.897 ************************************ 00:10:25.897 END TEST nvmf_queue_depth 00:10:25.897 ************************************ 00:10:25.897 16:25:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:25.897 16:25:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:25.897 16:25:44 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:25.897 16:25:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:25.897 16:25:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.897 16:25:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.897 ************************************ 00:10:25.897 START TEST nvmf_target_multipath 00:10:25.897 ************************************ 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:25.897 * Looking for test storage... 00:10:25.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.897 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.155 16:25:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:26.156 16:25:44 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:26.156 Cannot find device "nvmf_tgt_br" 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:26.156 Cannot find device "nvmf_tgt_br2" 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:26.156 Cannot find device "nvmf_tgt_br" 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:26.156 Cannot find device "nvmf_tgt_br2" 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:26.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:26.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:26.156 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:26.414 
16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:26.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:10:26.414 00:10:26.414 --- 10.0.0.2 ping statistics --- 00:10:26.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.414 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:26.414 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:26.414 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:10:26.414 00:10:26.414 --- 10.0.0.3 ping statistics --- 00:10:26.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.414 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:26.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:26.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:26.414 00:10:26.414 --- 10.0.0.1 ping statistics --- 00:10:26.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.414 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.414 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:26.415 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75681 00:10:26.415 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75681 00:10:26.415 16:25:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.415 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75681 ']' 00:10:26.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.415 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.415 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.415 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.415 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.415 16:25:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:26.415 [2024-07-21 16:25:44.584601] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
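For the multipath test the same veth topology is rebuilt and nvmf_tgt is started with -m 0xF, so four reactors serve the connections. The steps that follow in the trace then export a single malloc namespace through one subsystem with ANA reporting enabled (-r) and listeners on both 10.0.0.2 and 10.0.0.3, and the host connects to each address so the kernel groups the two controllers under one nvme-subsys with two paths. A condensed sketch of that dual-path export and connect (commands copied from the log, loop added only for brevity):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r: ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # host side: one nvme controller per path, grouped by the kernel as nvme-subsys0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f
  HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f
  for addr in 10.0.0.2 10.0.0.3; do
      nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID \
           -t tcp -n nqn.2016-06.io.spdk:cnode1 -a $addr -s 4420 -g -G
  done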
00:10:26.415 [2024-07-21 16:25:44.584681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.673 [2024-07-21 16:25:44.723631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.673 [2024-07-21 16:25:44.806814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.673 [2024-07-21 16:25:44.806860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.673 [2024-07-21 16:25:44.806870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.673 [2024-07-21 16:25:44.806877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.673 [2024-07-21 16:25:44.806882] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.673 [2024-07-21 16:25:44.807043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.673 [2024-07-21 16:25:44.807573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.673 [2024-07-21 16:25:44.808312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.673 [2024-07-21 16:25:44.808340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.607 16:25:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.607 16:25:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:27.607 16:25:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.607 16:25:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.607 16:25:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:27.607 16:25:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.607 16:25:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:27.865 [2024-07-21 16:25:45.838687] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.865 16:25:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:28.123 Malloc0 00:10:28.123 16:25:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:28.381 16:25:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.640 16:25:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.898 [2024-07-21 16:25:46.890314] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.898 16:25:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:10:29.156 [2024-07-21 16:25:47.134646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:29.156 16:25:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:29.413 16:25:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:29.413 16:25:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:29.413 16:25:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:29.413 16:25:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:29.413 16:25:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:29.413 16:25:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:31.936 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:31.937 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:31.937 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75820 00:10:31.937 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:31.937 16:25:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:31.937 [global] 00:10:31.937 thread=1 00:10:31.937 invalidate=1 00:10:31.937 rw=randrw 00:10:31.937 time_based=1 00:10:31.937 runtime=6 00:10:31.937 ioengine=libaio 00:10:31.937 direct=1 00:10:31.937 bs=4096 00:10:31.937 iodepth=128 00:10:31.937 norandommap=0 00:10:31.937 numjobs=1 00:10:31.937 00:10:31.937 verify_dump=1 00:10:31.937 verify_backlog=512 00:10:31.937 verify_state_save=0 00:10:31.937 do_verify=1 00:10:31.937 verify=crc32c-intel 00:10:31.937 [job0] 00:10:31.937 filename=/dev/nvme0n1 00:10:31.937 Could not set queue depth (nvme0n1) 00:10:31.937 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.937 fio-3.35 00:10:31.937 Starting 1 thread 00:10:32.500 16:25:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:33.099 16:25:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:34.470 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:34.470 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:34.470 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:34.470 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:34.470 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:34.727 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:34.728 16:25:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:35.657 16:25:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:35.657 16:25:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:35.657 16:25:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:35.657 16:25:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75820 00:10:38.183 00:10:38.183 job0: (groupid=0, jobs=1): err= 0: pid=75841: Sun Jul 21 16:25:55 2024 00:10:38.183 read: IOPS=11.5k, BW=44.8MiB/s (46.9MB/s)(269MiB/6005msec) 00:10:38.183 slat (usec): min=4, max=6309, avg=49.27, stdev=217.16 00:10:38.183 clat (usec): min=1001, max=13609, avg=7616.65, stdev=1141.12 00:10:38.183 lat (usec): min=1108, max=13622, avg=7665.93, stdev=1150.19 00:10:38.183 clat percentiles (usec): 00:10:38.183 | 1.00th=[ 4686], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 6849], 00:10:38.183 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7767], 00:10:38.183 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9503], 00:10:38.183 | 99.00th=[11207], 99.50th=[11600], 99.90th=[12387], 99.95th=[12780], 00:10:38.183 | 99.99th=[13042] 00:10:38.183 bw ( KiB/s): min= 5128, max=28888, per=51.76%, avg=23730.27, stdev=6734.01, samples=11 00:10:38.183 iops : min= 1282, max= 7222, avg=5932.55, stdev=1683.50, samples=11 00:10:38.183 write: IOPS=6713, BW=26.2MiB/s (27.5MB/s)(141MiB/5372msec); 0 zone resets 00:10:38.183 slat (usec): min=14, max=3988, avg=62.16, stdev=155.86 00:10:38.183 clat (usec): min=988, max=12919, avg=6584.30, stdev=974.06 00:10:38.183 lat (usec): min=1037, max=12969, avg=6646.46, stdev=976.52 00:10:38.183 clat percentiles (usec): 00:10:38.183 | 1.00th=[ 3589], 5.00th=[ 4752], 10.00th=[ 5538], 20.00th=[ 5997], 00:10:38.183 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6849], 00:10:38.183 | 70.00th=[ 6980], 80.00th=[ 7242], 90.00th=[ 7570], 95.00th=[ 7898], 00:10:38.183 | 99.00th=[ 9241], 99.50th=[10159], 99.90th=[11338], 99.95th=[11731], 00:10:38.183 | 99.99th=[12518] 00:10:38.183 bw ( KiB/s): min= 5416, max=29200, per=88.41%, avg=23742.36, stdev=6556.52, samples=11 00:10:38.183 iops : min= 1354, max= 7300, avg=5935.55, stdev=1639.12, samples=11 00:10:38.183 lat (usec) : 1000=0.01% 00:10:38.183 lat (msec) : 2=0.02%, 4=0.81%, 10=96.83%, 20=2.34% 00:10:38.183 cpu : usr=6.49%, sys=23.65%, ctx=6634, majf=0, minf=133 00:10:38.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:38.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:38.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:38.183 issued rwts: total=68832,36067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:38.183 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:38.183 00:10:38.183 Run status group 0 (all jobs): 00:10:38.183 READ: bw=44.8MiB/s (46.9MB/s), 44.8MiB/s-44.8MiB/s (46.9MB/s-46.9MB/s), io=269MiB (282MB), run=6005-6005msec 00:10:38.183 WRITE: bw=26.2MiB/s (27.5MB/s), 26.2MiB/s-26.2MiB/s (27.5MB/s-27.5MB/s), io=141MiB (148MB), run=5372-5372msec 00:10:38.183 00:10:38.183 Disk stats (read/write): 00:10:38.183 nvme0n1: ios=67784/35340, merge=0/0, 
ticks=482424/216566, in_queue=698990, util=98.62% 00:10:38.183 16:25:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:38.183 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:38.441 16:25:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:39.374 16:25:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:39.374 16:25:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:39.374 16:25:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:39.374 16:25:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:39.374 16:25:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75974 00:10:39.374 16:25:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:39.374 16:25:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:39.374 [global] 00:10:39.374 thread=1 00:10:39.374 invalidate=1 00:10:39.374 rw=randrw 00:10:39.374 time_based=1 00:10:39.374 runtime=6 00:10:39.374 ioengine=libaio 00:10:39.374 direct=1 00:10:39.374 bs=4096 00:10:39.374 iodepth=128 00:10:39.374 norandommap=0 00:10:39.374 numjobs=1 00:10:39.374 00:10:39.374 verify_dump=1 00:10:39.374 verify_backlog=512 00:10:39.374 verify_state_save=0 00:10:39.374 do_verify=1 00:10:39.374 verify=crc32c-intel 00:10:39.374 [job0] 00:10:39.374 filename=/dev/nvme0n1 00:10:39.374 Could not set queue depth (nvme0n1) 00:10:39.631 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.631 fio-3.35 00:10:39.631 Starting 1 thread 00:10:40.559 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:40.559 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:40.817 16:25:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:42.189 16:25:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:42.189 16:25:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:42.189 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:42.189 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:42.189 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:42.447 16:26:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:43.380 16:26:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:43.380 16:26:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:43.380 16:26:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:43.380 16:26:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75974 00:10:45.924 00:10:45.924 job0: (groupid=0, jobs=1): err= 0: pid=75995: Sun Jul 21 16:26:03 2024 00:10:45.924 read: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(259MiB/6007msec) 00:10:45.924 slat (usec): min=2, max=5430, avg=45.48, stdev=218.33 00:10:45.924 clat (usec): min=315, max=21436, avg=7920.78, stdev=2537.11 00:10:45.924 lat (usec): min=339, max=21447, avg=7966.25, stdev=2542.92 00:10:45.924 clat percentiles (usec): 00:10:45.924 | 1.00th=[ 1844], 5.00th=[ 2999], 10.00th=[ 5080], 20.00th=[ 6521], 00:10:45.924 | 30.00th=[ 6980], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8225], 00:10:45.924 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[10683], 95.00th=[12780], 00:10:45.924 | 99.00th=[15401], 99.50th=[16188], 99.90th=[18744], 99.95th=[19530], 00:10:45.924 | 99.99th=[20841] 00:10:45.925 bw ( KiB/s): min=14904, max=32656, per=53.33%, avg=23520.91, stdev=5139.60, samples=11 00:10:45.925 iops : min= 3726, max= 8164, avg=5880.18, stdev=1284.95, samples=11 00:10:45.925 write: IOPS=6313, BW=24.7MiB/s (25.9MB/s)(137MiB/5543msec); 0 zone resets 00:10:45.925 slat (usec): min=2, max=2095, avg=56.43, stdev=150.38 00:10:45.925 clat (usec): min=403, max=18002, avg=6797.79, stdev=2498.57 00:10:45.925 lat (usec): min=431, max=18027, avg=6854.22, stdev=2502.30 00:10:45.925 clat percentiles (usec): 00:10:45.925 | 1.00th=[ 1188], 5.00th=[ 1860], 10.00th=[ 2999], 20.00th=[ 5538], 00:10:45.925 | 30.00th=[ 6259], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7177], 00:10:45.925 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 9896], 95.00th=[11600], 00:10:45.925 | 99.00th=[13304], 99.50th=[13829], 99.90th=[15795], 99.95th=[16450], 00:10:45.925 | 99.99th=[17695] 00:10:45.925 bw ( KiB/s): min=15136, max=31952, per=93.31%, avg=23563.64, stdev=4791.05, samples=11 00:10:45.925 iops : min= 3784, max= 7988, avg=5890.91, stdev=1197.76, samples=11 00:10:45.925 lat (usec) : 500=0.02%, 750=0.10%, 1000=0.16% 00:10:45.925 lat (msec) : 2=2.79%, 4=6.16%, 10=78.65%, 20=12.09%, 50=0.01% 00:10:45.925 cpu : usr=5.39%, sys=22.59%, ctx=7162, majf=0, minf=108 00:10:45.925 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:45.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:45.925 issued rwts: total=66236,34995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.925 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:45.925 00:10:45.925 Run status group 0 (all jobs): 00:10:45.925 READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=259MiB (271MB), run=6007-6007msec 00:10:45.925 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=137MiB (143MB), run=5543-5543msec 00:10:45.925 00:10:45.925 Disk stats (read/write): 00:10:45.925 nvme0n1: ios=65360/34382, merge=0/0, ticks=483894/218495, in_queue=702389, util=98.66% 00:10:45.925 16:26:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:45.925 16:26:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.925 16:26:03 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.925 16:26:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.925 16:26:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.925 16:26:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.925 16:26:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.925 16:26:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:45.925 16:26:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.183 rmmod nvme_tcp 00:10:46.183 rmmod nvme_fabrics 00:10:46.183 rmmod nvme_keyring 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75681 ']' 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75681 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75681 ']' 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75681 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75681 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:46.183 killing process with pid 75681 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75681' 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75681 00:10:46.183 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 75681 00:10:46.441 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.441 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.441 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.441 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.441 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.441 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.441 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.441 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.700 16:26:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:46.700 00:10:46.700 real 0m20.661s 00:10:46.700 user 1m20.634s 00:10:46.700 sys 0m6.601s 00:10:46.700 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.700 ************************************ 00:10:46.700 16:26:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:46.700 END TEST nvmf_target_multipath 00:10:46.700 ************************************ 00:10:46.700 16:26:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:46.700 16:26:04 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:46.700 16:26:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.700 16:26:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.700 16:26:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.700 ************************************ 00:10:46.700 START TEST nvmf_zcopy 00:10:46.700 ************************************ 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:46.700 * Looking for test storage... 
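The multipath run that finishes above leans on a single pattern throughout: flip a listener's ANA state over JSON-RPC, then poll the host's /sys/block/<ctrl>c<path>n1/ana_state file once a second until it reports the new state, giving up after 20 tries. A minimal sketch of that helper, reconstructed from the trace rather than copied out of multipath.sh (the short scripts/rpc.py path in the usage note assumes the SPDK repo root as the working directory):

check_ana_state() {
    # $1 = multipath block device (e.g. nvme0c0n1), $2 = expected ANA state
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Keep waiting while the sysfs file is missing or still shows the old state.
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
}

# Usage, mirroring the calls traced above:
#   scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
#       -t tcp -a 10.0.0.2 -s 4420 -n optimized
#   check_ana_state nvme0c0n1 optimized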
00:10:46.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:46.700 Cannot find device "nvmf_tgt_br" 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.700 Cannot find device "nvmf_tgt_br2" 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:46.700 Cannot find device "nvmf_tgt_br" 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:10:46.700 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:46.957 Cannot find device "nvmf_tgt_br2" 00:10:46.957 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:46.957 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:46.957 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:46.957 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.957 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:46.957 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.957 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:46.957 16:26:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:46.957 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:47.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:10:47.214 00:10:47.214 --- 10.0.0.2 ping statistics --- 00:10:47.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.214 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:47.214 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:47.214 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:10:47.214 00:10:47.214 --- 10.0.0.3 ping statistics --- 00:10:47.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.214 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:47.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:47.214 00:10:47.214 --- 10.0.0.1 ping statistics --- 00:10:47.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.214 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=76272 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 76272 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 76272 ']' 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:47.214 16:26:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.214 [2024-07-21 16:26:05.279853] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:47.214 [2024-07-21 16:26:05.279944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.472 [2024-07-21 16:26:05.421960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.472 [2024-07-21 16:26:05.538104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.472 [2024-07-21 16:26:05.538508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:47.472 [2024-07-21 16:26:05.538612] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.472 [2024-07-21 16:26:05.538718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.472 [2024-07-21 16:26:05.538800] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.472 [2024-07-21 16:26:05.538907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 [2024-07-21 16:26:06.315253] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 [2024-07-21 16:26:06.331429] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 malloc0 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.404 
16:26:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:48.404 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:48.404 { 00:10:48.404 "params": { 00:10:48.404 "name": "Nvme$subsystem", 00:10:48.404 "trtype": "$TEST_TRANSPORT", 00:10:48.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:48.404 "adrfam": "ipv4", 00:10:48.404 "trsvcid": "$NVMF_PORT", 00:10:48.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:48.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:48.404 "hdgst": ${hdgst:-false}, 00:10:48.404 "ddgst": ${ddgst:-false} 00:10:48.404 }, 00:10:48.404 "method": "bdev_nvme_attach_controller" 00:10:48.404 } 00:10:48.404 EOF 00:10:48.404 )") 00:10:48.405 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:48.405 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:48.405 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:48.405 16:26:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:48.405 "params": { 00:10:48.405 "name": "Nvme1", 00:10:48.405 "trtype": "tcp", 00:10:48.405 "traddr": "10.0.0.2", 00:10:48.405 "adrfam": "ipv4", 00:10:48.405 "trsvcid": "4420", 00:10:48.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:48.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:48.405 "hdgst": false, 00:10:48.405 "ddgst": false 00:10:48.405 }, 00:10:48.405 "method": "bdev_nvme_attach_controller" 00:10:48.405 }' 00:10:48.405 [2024-07-21 16:26:06.435152] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:48.405 [2024-07-21 16:26:06.435242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76329 ] 00:10:48.405 [2024-07-21 16:26:06.573401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.663 [2024-07-21 16:26:06.690832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.921 Running I/O for 10 seconds... 
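Note that the bdevperf run above never writes a config file: gen_nvmf_target_json expands one here-doc per controller, joins the fragments, pretty-prints the result with jq, and bdevperf reads it straight from --json /dev/fd/62 via process substitution. A standalone sketch of the same flow, built from the parameters the trace prints (the surrounding "subsystems"/"bdev" wrapper is an assumption here; the trace only shows the per-controller fragment):

# Emit a bdevperf JSON config that attaches the controller traced above.
gen_target_json() {
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Process substitution hands bdevperf an anonymous /dev/fd/N path, so no
# temporary file is left behind (this is where the /dev/fd/62 above comes from).
./build/examples/bdevperf --json <(gen_target_json) -t 10 -q 128 -w verify -o 8192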
00:10:58.924 00:10:58.924 Latency(us) 00:10:58.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.925 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:58.925 Verification LBA range: start 0x0 length 0x1000 00:10:58.925 Nvme1n1 : 10.01 7211.25 56.34 0.00 0.00 17690.73 848.99 29550.78 00:10:58.925 =================================================================================================================== 00:10:58.925 Total : 7211.25 56.34 0.00 0.00 17690.73 848.99 29550.78 00:10:58.925 16:26:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76447 00:10:58.925 16:26:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:58.925 16:26:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:58.925 16:26:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:58.925 16:26:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:58.925 16:26:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:59.184 16:26:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:59.184 16:26:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:59.184 16:26:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:59.184 { 00:10:59.184 "params": { 00:10:59.184 "name": "Nvme$subsystem", 00:10:59.184 "trtype": "$TEST_TRANSPORT", 00:10:59.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.184 "adrfam": "ipv4", 00:10:59.184 "trsvcid": "$NVMF_PORT", 00:10:59.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.184 "hdgst": ${hdgst:-false}, 00:10:59.184 "ddgst": ${ddgst:-false} 00:10:59.184 }, 00:10:59.184 "method": "bdev_nvme_attach_controller" 00:10:59.184 } 00:10:59.184 EOF 00:10:59.184 )") 00:10:59.184 16:26:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:59.184 [2024-07-21 16:26:17.117238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.184 [2024-07-21 16:26:17.117295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.184 16:26:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
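From here to the end of the section the log is dominated by repeating pairs of target-side errors (subsystem.c: "Requested NSID 1 already in use", nvmf_rpc.c: "Unable to add namespace") with matching client-side "error on JSON-RPC call ... Code=-32602 Msg=Invalid parameters" lines. They show the test re-issuing nvmf_subsystem_add_ns for NSID 1, which was already attached back at target/zcopy.sh@30, while the second bdevperf job keeps the subsystem busy; the nvmf_rpc_ns_paused frames suggest each attempt goes through the subsystem pause/resume path before being rejected, and the run simply carries on past the failures. An illustrative way to reproduce that churn by hand (the loop count and the "|| true" are assumptions, not what zcopy.sh does verbatim):

# From the SPDK repo root, hammer the live subsystem with duplicate add_ns calls;
# each one is expected to fail with JSON-RPC -32602 while NSID 1 is still in use.
for _ in {1..20}; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done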
00:10:59.184 16:26:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 16:26:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:59.185 "params": { 00:10:59.185 "name": "Nvme1", 00:10:59.185 "trtype": "tcp", 00:10:59.185 "traddr": "10.0.0.2", 00:10:59.185 "adrfam": "ipv4", 00:10:59.185 "trsvcid": "4420", 00:10:59.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.185 "hdgst": false, 00:10:59.185 "ddgst": false 00:10:59.185 }, 00:10:59.185 "method": "bdev_nvme_attach_controller" 00:10:59.185 }' 00:10:59.185 [2024-07-21 16:26:17.129191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.129220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.137186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.137212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.145187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.145213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.153189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.153558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.161285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.161502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.169234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.169472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 [2024-07-21 16:26:17.169747] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:10:59.185 [2024-07-21 16:26:17.170168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76447 ] 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.177256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.177322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.185237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.185297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.193235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.193280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.201238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.201513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.209249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.209451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.217246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.217427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.225254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.225433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.233254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.233442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.241255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.241457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.249258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.249312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.257222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.257247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.265222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.265390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.273228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.273407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.281232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.281405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.289231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.289375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.297228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.297376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.305232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.305377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 [2024-07-21 16:26:17.308712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.313240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.313276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.185 2024/07/21 16:26:17 error on JSON-RPC 
call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.185 [2024-07-21 16:26:17.325235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.185 [2024-07-21 16:26:17.325283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.186 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.186 [2024-07-21 16:26:17.337238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.186 [2024-07-21 16:26:17.337271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.186 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.186 [2024-07-21 16:26:17.349236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.186 [2024-07-21 16:26:17.349416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.186 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.186 [2024-07-21 16:26:17.357239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.186 [2024-07-21 16:26:17.357401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.186 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.186 [2024-07-21 16:26:17.365236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.186 [2024-07-21 16:26:17.365392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.186 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.186 [2024-07-21 16:26:17.373238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.186 [2024-07-21 16:26:17.373412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.186 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.186 [2024-07-21 16:26:17.381244] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.186 [2024-07-21 16:26:17.381407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.186 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.186 [2024-07-21 16:26:17.389245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.186 [2024-07-21 16:26:17.389408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.445 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.445 [2024-07-21 16:26:17.397246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.445 [2024-07-21 16:26:17.397394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.445 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.445 [2024-07-21 16:26:17.402911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.445 [2024-07-21 16:26:17.405248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.445 [2024-07-21 16:26:17.405278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.445 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.445 [2024-07-21 16:26:17.413248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.445 [2024-07-21 16:26:17.413280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.445 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.445 [2024-07-21 16:26:17.421249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.445 [2024-07-21 16:26:17.421282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.445 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.445 [2024-07-21 16:26:17.429251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.445 [2024-07-21 16:26:17.429280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:59.445 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.445 [2024-07-21 16:26:17.437252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.445 [2024-07-21 16:26:17.437297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.445256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.445287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.453259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.453290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.461259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.461288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.469291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.469312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.477275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.477298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:59.446 [2024-07-21 16:26:17.485274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.485294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.497280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.497302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.505281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.505302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.513280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.513302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.521334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.521364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.529354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.529381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.537340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.537365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.545356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.545382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.553390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.553418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.561371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.561397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.569365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.569389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.581386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.581429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 Running I/O for 5 seconds... 
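Every Code=-32602 response above comes from the same request: the test keeps asking nvmf_subsystem_add_ns to attach bdev malloc0 as NSID 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is already in use, so the target answers each attempt with "Invalid parameters" and logs "Requested NSID 1 already in use". A minimal sketch of an equivalent request is below, assuming the target listens on SPDK's default RPC socket /var/tmp/spdk.sock; the socket path and request id actually used by the test harness are not shown in this log.

  import json
  import socket

  SOCK_PATH = "/var/tmp/spdk.sock"   # assumed default; the test may use another path

  request = {
      "jsonrpc": "2.0",
      "id": 1,                        # arbitrary request id for illustration
      "method": "nvmf_subsystem_add_ns",
      "params": {
          "nqn": "nqn.2016-06.io.spdk:cnode1",
          "namespace": {
              "bdev_name": "malloc0",
              "nsid": 1,              # already in use -> rejected with Code=-32602
              "no_auto_visible": False,
          },
      },
  }

  with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
      sock.connect(SOCK_PATH)
      sock.sendall(json.dumps(request).encode())
      # Read one response; the error seen throughout this log would look like:
      # {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}
      print(sock.recv(65536).decode())

The "Running I/O for 5 seconds..." marker above indicates the I/O phase of the test starts while these add-namespace retries continue in the background.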
00:10:59.446 [2024-07-21 16:26:17.589367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.589391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.600846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.600876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.446 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.446 [2024-07-21 16:26:17.608948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.446 [2024-07-21 16:26:17.608989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.447 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.447 [2024-07-21 16:26:17.620028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.447 [2024-07-21 16:26:17.620057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.447 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.447 [2024-07-21 16:26:17.629048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.447 [2024-07-21 16:26:17.629077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.447 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.447 [2024-07-21 16:26:17.645641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.447 [2024-07-21 16:26:17.645671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.447 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.656038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.656084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.663861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.663890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.674372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.674401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.688892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.688934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.700391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.700419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.709773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.709801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.717473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.717502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.729206] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.729248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.740471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.740500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.749116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.749147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.758163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.758193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.767267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.767308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.776062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.776108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.785456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.785484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.794381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.794410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.803869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.803897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.812988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.813017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.706 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.706 [2024-07-21 16:26:17.822567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.706 [2024-07-21 16:26:17.822600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.707 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.707 [2024-07-21 16:26:17.831868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.707 [2024-07-21 16:26:17.831897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.707 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.707 [2024-07-21 16:26:17.844810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.707 [2024-07-21 16:26:17.844853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.707 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.707 [2024-07-21 16:26:17.862269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:59.707 [2024-07-21 16:26:17.862313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.707 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.707 [2024-07-21 16:26:17.875861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.707 [2024-07-21 16:26:17.875890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.707 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.707 [2024-07-21 16:26:17.884559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.707 [2024-07-21 16:26:17.884587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.707 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.707 [2024-07-21 16:26:17.899530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.707 [2024-07-21 16:26:17.899574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.707 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:17.917161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:17.917191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:17.928326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:17.928357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:17.943867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:17.943896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:17.960561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:17.960590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:17.972006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:17.972037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:17.981165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:17.981195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:17.990615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:17.990644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:17.999984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:18.000013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:18.009342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:18.009372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:18.018601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:59.966 [2024-07-21 16:26:18.018630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:18.028454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:18.028481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:18.037568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:18.037598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:18.046710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:18.046739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:18.055847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:18.055877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:18.065478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:18.065506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.966 [2024-07-21 16:26:18.074948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.966 [2024-07-21 16:26:18.074982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.966 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.084201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.084231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.093684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.093713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.103180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.103209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.112312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.112344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.121588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.121625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.131000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.131029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.140241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.140280] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.149735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.149764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.159007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.159043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.967 [2024-07-21 16:26:18.167915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.967 [2024-07-21 16:26:18.167944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.967 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.226 [2024-07-21 16:26:18.182590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.226 [2024-07-21 16:26:18.182619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.226 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.226 [2024-07-21 16:26:18.193570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.226 [2024-07-21 16:26:18.193599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.226 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.226 [2024-07-21 16:26:18.209286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.226 [2024-07-21 16:26:18.209313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.226 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.226 [2024-07-21 16:26:18.225756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.226 [2024-07-21 16:26:18.225785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.226 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.226 [2024-07-21 16:26:18.242665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.226 [2024-07-21 16:26:18.242695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.226 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.226 [2024-07-21 16:26:18.253318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.226 [2024-07-21 16:26:18.253347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.226 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.226 [2024-07-21 16:26:18.270207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.226 [2024-07-21 16:26:18.270291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.226 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.226 [2024-07-21 16:26:18.280856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.226 [2024-07-21 16:26:18.280887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.289523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.289553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.298689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.298718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.308070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.308100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.317497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.317526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.326799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.326841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.336353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.336380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.345612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.345640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.355534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.355561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:00.227 [2024-07-21 16:26:18.365513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.365553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.374824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.374866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.383957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.384000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.393041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.393070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.402188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.402238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.411414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.411457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.420377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.420406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.227 [2024-07-21 16:26:18.429443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.227 [2024-07-21 16:26:18.429484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.227 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.442443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.442471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.451146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.451175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.461626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.461655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.470024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.470053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.481146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.481176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.489524] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.489555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.499837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.499867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.507796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.507825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.519146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.519188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.535581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.535622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.547141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.547169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.562954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.562992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.487 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.487 [2024-07-21 16:26:18.573505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.487 [2024-07-21 16:26:18.573534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.488 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.488 [2024-07-21 16:26:18.589798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.488 [2024-07-21 16:26:18.589840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.488 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.488 [2024-07-21 16:26:18.606173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.488 [2024-07-21 16:26:18.606202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.488 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.488 [2024-07-21 16:26:18.622960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.488 [2024-07-21 16:26:18.622989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.488 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.488 [2024-07-21 16:26:18.633883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.488 [2024-07-21 16:26:18.633933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.488 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.488 [2024-07-21 16:26:18.651247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.488 [2024-07-21 16:26:18.651311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.488 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.488 [2024-07-21 16:26:18.661913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:00.488 [2024-07-21 16:26:18.661942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.488 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.488 [2024-07-21 16:26:18.677132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.488 [2024-07-21 16:26:18.677173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.488 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.747 [2024-07-21 16:26:18.694105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.747 [2024-07-21 16:26:18.694134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.747 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.747 [2024-07-21 16:26:18.710425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.747 [2024-07-21 16:26:18.710453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.747 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.747 [2024-07-21 16:26:18.721961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.721990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.737112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.737156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.746031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.746061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.762820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.762861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.779858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.779886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.796480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.796522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.812651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.812681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.823753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.823782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.838809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.838851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.849300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
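Each of the failures above is the same exchange: the Go JSON-RPC client asks the target to attach bdev malloc0 as namespace 1 of nqn.2016-06.io.spdk:cnode1, and the target rejects it because NSID 1 is already attached. Reconstructed from the logged method and params (the framing, request id, and field order below are assumptions, not taken from the log), one request/response pair looks roughly like:

  -> {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
      "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                 "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": false}}}
  <- {"jsonrpc": "2.0", "id": 1,
      "error": {"code": -32602, "message": "Invalid parameters"}}

-32602 is the standard JSON-RPC "invalid params" code, which is what the client logs as Code=-32602 Msg=Invalid parameters after the target's spdk_nvmf_subsystem_add_ns_ext check fails with "Requested NSID 1 already in use".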
00:11:00.748 [2024-07-21 16:26:18.849331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.865578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.865606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.876385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.876415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.892063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.892095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.908960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.909003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.919912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.919940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.935733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.935774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.748 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:00.748 [2024-07-21 16:26:18.951772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.748 [2024-07-21 16:26:18.951801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:18.968226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:18.968275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:18.985157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:18.985186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:18.996376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:18.996404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.012735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.012778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.028971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.029013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.045612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.045641] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.057314] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.057353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.065430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.065458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.076100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.076143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.084287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.084316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.094391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.094440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.102641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.102669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.113299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.113327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.121547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.121576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.133209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.133239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.141929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.141961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.153328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.153358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.161802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.161831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.172325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.172356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.184189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.184218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.009 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.009 [2024-07-21 16:26:19.192837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.009 [2024-07-21 16:26:19.192867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.010 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.010 [2024-07-21 16:26:19.202000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.010 [2024-07-21 16:26:19.202029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.010 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.010 [2024-07-21 16:26:19.211198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.010 [2024-07-21 16:26:19.211232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.010 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.224545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.224574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.240389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.240431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:01.270 [2024-07-21 16:26:19.251551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.251581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.260225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.260255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.269544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.269574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.278809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.278838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.287896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.287939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.297412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.297440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.306867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.306897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.315674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.315703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.324834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.324864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.333844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.333874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.343339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.343368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.352662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.352692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.362617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.362660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.372434] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.372463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.383353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.383382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.400150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.400192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.417115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.417144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.428635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.428665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.436393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.270 [2024-07-21 16:26:19.436421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.270 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.270 [2024-07-21 16:26:19.447512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.271 [2024-07-21 16:26:19.447555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.271 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.271 [2024-07-21 16:26:19.456587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.271 [2024-07-21 16:26:19.456626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.271 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.271 [2024-07-21 16:26:19.467291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.271 [2024-07-21 16:26:19.467329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.271 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.271 [2024-07-21 16:26:19.475912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.271 [2024-07-21 16:26:19.475940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.533 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.533 [2024-07-21 16:26:19.484936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.533 [2024-07-21 16:26:19.484965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.533 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.533 [2024-07-21 16:26:19.493667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.533 [2024-07-21 16:26:19.493696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.533 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.533 [2024-07-21 16:26:19.502818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.533 [2024-07-21 16:26:19.502846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.533 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.533 [2024-07-21 16:26:19.511483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:01.533 [2024-07-21 16:26:19.511511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.533 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.533 [2024-07-21 16:26:19.520028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.533 [2024-07-21 16:26:19.520056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.528749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.528779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.537477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.537506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.546370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.546397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.555371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.555400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.564186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.564215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.572776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.572805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.581783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.581811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.590481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.590509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.599122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.599163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.607836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.607864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.616554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.616582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.625521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:01.534 [2024-07-21 16:26:19.625563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.634332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.634361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.643256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.643297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.652389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.652418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.660956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.660985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.669835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.669863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:01.534 [2024-07-21 16:26:19.679022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.534 [2024-07-21 16:26:19.679051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:01.534 [2024-07-21 16:26:19.687815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:01.534 [2024-07-21 16:26:19.687844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:01.534 2024/07/21 16:26:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
(The same three-record sequence — subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use", nvmf_rpc.c:1553:nvmf_rpc_ns_paused "Unable to add namespace", and the Go JSON-RPC client reporting Code=-32602 Msg=Invalid parameters for nvmf_subsystem_add_ns — repeats for every subsequent attempt, wall-clock timestamps 16:26:19.696 through 16:26:21.250, elapsed timestamps 00:11:01.534 through 00:11:03.097.)
00:11:03.097 [2024-07-21 16:26:21.250657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.097 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.097 [2024-07-21 16:26:21.267081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.097 [2024-07-21 16:26:21.267107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.097 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.097 [2024-07-21 16:26:21.283017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.097 [2024-07-21 16:26:21.283043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.097 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.097 [2024-07-21 16:26:21.297474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.097 [2024-07-21 16:26:21.297500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.097 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.354 [2024-07-21 16:26:21.307853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.354 [2024-07-21 16:26:21.307879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.354 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.354 [2024-07-21 16:26:21.324029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.354 [2024-07-21 16:26:21.324055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.354 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.340307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.340331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.356865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.356892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.373509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.373535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.389494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.389519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.403957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.403983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.419791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.419818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.436343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.436371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.453625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.453652] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.468495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.468521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.479863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.479890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.495460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.495486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.511464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.511490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.528555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.528582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.544409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.544434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.355 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.355 [2024-07-21 16:26:21.561280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.355 [2024-07-21 16:26:21.561319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.578397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.578423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.595004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.595031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.611554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.611581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.627734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.627761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.644181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.644208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.656046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.656072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.672330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.672358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.688185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.688211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.704891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.704931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.721503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.721531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.737692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.737718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.754494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.754521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:03.613 [2024-07-21 16:26:21.771397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.771424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.787093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.787121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.613 [2024-07-21 16:26:21.805128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.613 [2024-07-21 16:26:21.805156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.613 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.871 [2024-07-21 16:26:21.820699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.871 [2024-07-21 16:26:21.820736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.871 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.871 [2024-07-21 16:26:21.838678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.871 [2024-07-21 16:26:21.838711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.871 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.871 [2024-07-21 16:26:21.852913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.871 [2024-07-21 16:26:21.852944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.871 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.871 [2024-07-21 16:26:21.868686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.871 [2024-07-21 16:26:21.868724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.871 2024/07/21 16:26:21 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.871 [2024-07-21 16:26:21.886120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.871 [2024-07-21 16:26:21.886152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.871 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.871 [2024-07-21 16:26:21.903425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:21.903471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:21.919298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:21.919341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:21.936711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:21.936742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:21.952606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:21.952646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:21.970130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:21.970163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:21.986108] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:21.986141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:22.003125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:22.003155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:22.019038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:22.019070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:22.036629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:22.036668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:22.052971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:22.053001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:03.872 [2024-07-21 16:26:22.069343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.872 [2024-07-21 16:26:22.069368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.872 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.129 [2024-07-21 16:26:22.086907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.129 [2024-07-21 16:26:22.086950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.129 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.129 [2024-07-21 16:26:22.103368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.129 [2024-07-21 16:26:22.103399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.120963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.120998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.137377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.137423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.154501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.154531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.171515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.171545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.185921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.185953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.201757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.201787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.219125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.219155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.235332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.235371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.252626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.252656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.268911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.268942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.285773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.285804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.302185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.302215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.318928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.318957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.130 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.130 [2024-07-21 16:26:22.335534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.130 [2024-07-21 16:26:22.335566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.352123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.352154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.368563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.368593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.384927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.384959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.401931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.401962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.418985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:04.458 [2024-07-21 16:26:22.419015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.436381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.436411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.452630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.452661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.469868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.469908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.486233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.486274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.503692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.503723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.519793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.519836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
00:11:04.458 Latency(us)
00:11:04.458 Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:11:04.458 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:04.458 Nvme1n1 :           5.01        13965.11  109.10  0.00    0.00  9153.74  4200.26  18469.24
00:11:04.458 ===================================================================================================================
00:11:04.458 Total :             5.01        13965.11  109.10  0.00    0.00  9153.74  4200.26  18469.24
*ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.616218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.616258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.628199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.628224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.640201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.640224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.652202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.652225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.458 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.458 [2024-07-21 16:26:22.664205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.458 [2024-07-21 16:26:22.664228] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.715 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:04.715 [2024-07-21 16:26:22.676207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:04.715 [2024-07-21 16:26:22.676230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:04.715 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters
00:11:04.715 [2024-07-21 16:26:22.688211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:04.715 [2024-07-21 16:26:22.688246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:04.715 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
(the same three-line error group repeats eleven more times between 16:26:22.700 and 16:26:22.820, at roughly 12 ms intervals, identical except for the timestamps)
00:11:04.716 [2024-07-21 16:26:22.832245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:04.716 [2024-07-21 16:26:22.832278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:04.716 2024/07/21 16:26:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:04.716
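The rejected call is the same one each time. As a minimal sketch of the conflict (the trace goes through the suite's Go JSON-RPC client, so the plain rpc.py form below, and the malloc size, are assumptions):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py bdev_malloc_create 64 512 -b malloc0                           # backing bdev; size/block size assumed here
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first add of NSID 1 succeeds
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # repeat -> -32602, "Requested NSID 1 already in use"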
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76447) - No such process 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76447 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.716 delay0 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.716 16:26:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:04.973 [2024-07-21 16:26:23.023258] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:11.548 Initializing NVMe Controllers 00:11:11.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:11.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:11.548 Initialization complete. Launching workers. 
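Stripped of the xtrace noise, the sequence above swaps the plain malloc0 namespace for a delay bdev and then drives it with the abort example. Expressed as direct rpc.py calls (the script actually goes through its rpc_cmd wrapper, so this form is an assumption, though every argument is copied from the trace):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$rpc_py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latency arguments copied verbatim from the trace
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'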
00:11:11.548 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 372 00:11:11.548 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 659, failed to submit 33 00:11:11.548 success 446, unsuccess 213, failed 0 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:11.548 rmmod nvme_tcp 00:11:11.548 rmmod nvme_fabrics 00:11:11.548 rmmod nvme_keyring 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 76272 ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 76272 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 76272 ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 76272 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76272 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:11.548 killing process with pid 76272 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76272' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 76272 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 76272 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:11.548 00:11:11.548 real 0m24.859s 00:11:11.548 user 0m39.193s 00:11:11.548 sys 0m7.373s 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.548 ************************************ 00:11:11.548 END TEST nvmf_zcopy 00:11:11.548 ************************************ 00:11:11.548 16:26:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:11.548 16:26:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:11.548 16:26:29 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:11.548 16:26:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:11.548 16:26:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.548 16:26:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:11.548 ************************************ 00:11:11.548 START TEST nvmf_nmic 00:11:11.548 ************************************ 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:11.548 * Looking for test storage... 00:11:11.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:11.548 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:11.805 Cannot find device "nvmf_tgt_br" 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.805 Cannot find device "nvmf_tgt_br2" 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:11.805 Cannot find device "nvmf_tgt_br" 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:11.805 Cannot find device "nvmf_tgt_br2" 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.805 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.805 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.806 16:26:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.806 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:11.806 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:11.806 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:12.062 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:12.062 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:12.062 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:12.062 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:12.062 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:12.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:12.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:11:12.062 00:11:12.062 --- 10.0.0.2 ping statistics --- 00:11:12.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.062 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:12.062 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:12.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:12.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:12.062 00:11:12.062 --- 10.0.0.3 ping statistics --- 00:11:12.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.062 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:12.062 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:12.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:12.062 00:11:12.062 --- 10.0.0.1 ping statistics --- 00:11:12.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.063 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76772 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76772 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76772 ']' 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.063 16:26:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:12.063 [2024-07-21 16:26:30.143157] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
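The wall of ip/iptables commands above boils down to a small veth-plus-bridge topology. Condensed here with the commands lifted from the trace (only the grouping, and the omission of the second target pair nvmf_tgt_if2 at 10.0.0.3, are editorial):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# each interface is also brought up with "ip link set <if> up", as in the trace above
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT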
00:11:12.063 [2024-07-21 16:26:30.143278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.320 [2024-07-21 16:26:30.287057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.320 [2024-07-21 16:26:30.407195] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.320 [2024-07-21 16:26:30.407295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.320 [2024-07-21 16:26:30.407320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.320 [2024-07-21 16:26:30.407331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.320 [2024-07-21 16:26:30.407347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.320 [2024-07-21 16:26:30.407534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.320 [2024-07-21 16:26:30.407680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.320 [2024-07-21 16:26:30.408225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.320 [2024-07-21 16:26:30.408230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 [2024-07-21 16:26:31.204730] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 Malloc0 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 [2024-07-21 16:26:31.274927] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.255 test case1: single bdev can't be used in multiple subsystems 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:13.255 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.256 [2024-07-21 16:26:31.298793] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:13.256 [2024-07-21 16:26:31.298832] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:13.256 [2024-07-21 16:26:31.298845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:13.256 2024/07/21 16:26:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:13.256 request: 00:11:13.256 { 00:11:13.256 "method": "nvmf_subsystem_add_ns", 00:11:13.256 "params": { 00:11:13.256 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:13.256 "namespace": { 00:11:13.256 "bdev_name": "Malloc0", 00:11:13.256 "no_auto_visible": false 00:11:13.256 } 00:11:13.256 } 00:11:13.256 } 00:11:13.256 Got JSON-RPC error response 00:11:13.256 GoRPCClient: error on JSON-RPC call 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- 
target/nmic.sh@29 -- # nmic_status=1 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:13.256 Adding namespace failed - expected result. 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:13.256 test case2: host connect to nvmf target in multiple paths 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:13.256 [2024-07-21 16:26:31.310928] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.256 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.513 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:13.513 16:26:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.513 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.513 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.513 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:13.513 16:26:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:16.050 16:26:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:16.050 16:26:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:16.050 16:26:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.050 16:26:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:16.050 16:26:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.050 16:26:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:16.050 16:26:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:16.050 [global] 00:11:16.050 thread=1 00:11:16.050 invalidate=1 00:11:16.050 rw=write 00:11:16.050 time_based=1 00:11:16.050 runtime=1 00:11:16.050 ioengine=libaio 00:11:16.050 direct=1 00:11:16.050 bs=4096 00:11:16.050 iodepth=1 00:11:16.050 norandommap=0 00:11:16.050 numjobs=1 00:11:16.050 00:11:16.050 verify_dump=1 00:11:16.050 verify_backlog=512 00:11:16.050 verify_state_save=0 00:11:16.050 do_verify=1 00:11:16.050 verify=crc32c-intel 00:11:16.050 [job0] 00:11:16.050 filename=/dev/nvme0n1 00:11:16.050 Could not set queue depth (nvme0n1) 00:11:16.050 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:16.050 fio-3.35 00:11:16.050 
Starting 1 thread 00:11:16.986 00:11:16.986 job0: (groupid=0, jobs=1): err= 0: pid=76882: Sun Jul 21 16:26:34 2024 00:11:16.986 read: IOPS=2727, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1000msec) 00:11:16.986 slat (nsec): min=13159, max=80971, avg=18069.83, stdev=7068.31 00:11:16.986 clat (usec): min=121, max=435, avg=174.48, stdev=26.13 00:11:16.986 lat (usec): min=134, max=466, avg=192.55, stdev=27.36 00:11:16.986 clat percentiles (usec): 00:11:16.986 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 153], 00:11:16.986 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:11:16.986 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 221], 00:11:16.986 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 318], 99.95th=[ 392], 00:11:16.986 | 99.99th=[ 437] 00:11:16.986 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:11:16.986 slat (usec): min=18, max=129, avg=26.04, stdev=10.42 00:11:16.986 clat (usec): min=84, max=1565, avg=124.79, stdev=37.20 00:11:16.986 lat (usec): min=104, max=1635, avg=150.83, stdev=40.42 00:11:16.986 clat percentiles (usec): 00:11:16.986 | 1.00th=[ 91], 5.00th=[ 97], 10.00th=[ 100], 20.00th=[ 105], 00:11:16.986 | 30.00th=[ 111], 40.00th=[ 116], 50.00th=[ 121], 60.00th=[ 127], 00:11:16.986 | 70.00th=[ 133], 80.00th=[ 141], 90.00th=[ 153], 95.00th=[ 165], 00:11:16.986 | 99.00th=[ 194], 99.50th=[ 206], 99.90th=[ 247], 99.95th=[ 947], 00:11:16.986 | 99.99th=[ 1565] 00:11:16.986 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:16.986 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:16.986 lat (usec) : 100=5.40%, 250=93.95%, 500=0.62%, 1000=0.02% 00:11:16.986 lat (msec) : 2=0.02% 00:11:16.986 cpu : usr=2.60%, sys=9.30%, ctx=5801, majf=0, minf=2 00:11:16.986 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:16.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.986 issued rwts: total=2727,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.986 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:16.986 00:11:16.986 Run status group 0 (all jobs): 00:11:16.986 READ: bw=10.7MiB/s (11.2MB/s), 10.7MiB/s-10.7MiB/s (11.2MB/s-11.2MB/s), io=10.7MiB (11.2MB), run=1000-1000msec 00:11:16.986 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1000-1000msec 00:11:16.986 00:11:16.986 Disk stats (read/write): 00:11:16.986 nvme0n1: ios=2610/2586, merge=0/0, ticks=504/364, in_queue=868, util=91.48% 00:11:16.986 16:26:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 
0 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.986 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:17.246 rmmod nvme_tcp 00:11:17.246 rmmod nvme_fabrics 00:11:17.246 rmmod nvme_keyring 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76772 ']' 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76772 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76772 ']' 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76772 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76772 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:17.246 killing process with pid 76772 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76772' 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76772 00:11:17.246 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76772 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:17.504 ************************************ 00:11:17.504 END TEST nvmf_nmic 00:11:17.504 ************************************ 00:11:17.504 00:11:17.504 real 0m5.969s 00:11:17.504 user 0m20.211s 00:11:17.504 sys 0m1.313s 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.504 16:26:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:17.504 16:26:35 nvmf_tcp -- common/autotest_common.sh@1142 -- 
# return 0 00:11:17.504 16:26:35 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:17.504 16:26:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:17.504 16:26:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.504 16:26:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:17.504 ************************************ 00:11:17.504 START TEST nvmf_fio_target 00:11:17.504 ************************************ 00:11:17.504 16:26:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:17.763 * Looking for test storage... 00:11:17.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:17.763 Cannot find device "nvmf_tgt_br" 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:17.763 Cannot find device "nvmf_tgt_br2" 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:11:17.763 Cannot find device "nvmf_tgt_br" 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:17.763 Cannot find device "nvmf_tgt_br2" 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:17.763 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:17.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:17.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:17.764 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:18.022 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:18.022 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:18.022 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:18.022 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:18.022 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:18.022 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:18.022 16:26:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:18.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:11:18.022 00:11:18.022 --- 10.0.0.2 ping statistics --- 00:11:18.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.022 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:18.022 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:18.022 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:18.022 00:11:18.022 --- 10.0.0.3 ping statistics --- 00:11:18.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.022 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:18.022 00:11:18.022 --- 10.0.0.1 ping statistics --- 00:11:18.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.022 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.022 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=77065 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 77065 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 77065 ']' 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
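The block above is the harness building its virtual test network before the target is started: one veth pair for the initiator in the root namespace, two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, all tied together by the nvmf_br bridge, plus iptables rules for TCP port 4420 and ping checks of 10.0.0.1/2/3. The earlier "Cannot find device"/"Cannot open network namespace" messages are the same helper tearing down leftovers from a previous run before rebuilding. A condensed, standalone sketch of the topology (interface, namespace, and address names taken from the trace; run as root) would look like:

    #!/usr/bin/env bash
    # Sketch of the veth/bridge topology nvmf_veth_init builds in the trace above.
    set -e
    ip netns add nvmf_tgt_ns_spdk
    # Initiator pair stays in the root namespace; the two target pairs get one
    # end moved into the target namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the three root-namespace ends so 10.0.0.1, .2 and .3 can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond ping RTTs in the trace confirm the bridge path between the initiator-side 10.0.0.1 and the two in-namespace target addresses is up before any NVMe/TCP traffic is attempted.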
00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.023 16:26:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.023 [2024-07-21 16:26:36.175377] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:11:18.023 [2024-07-21 16:26:36.175466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.281 [2024-07-21 16:26:36.314044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.281 [2024-07-21 16:26:36.426095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.281 [2024-07-21 16:26:36.426401] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.281 [2024-07-21 16:26:36.426562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.281 [2024-07-21 16:26:36.426808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.281 [2024-07-21 16:26:36.426850] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
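At this point the target application has been launched inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, i.e. all tracepoint groups enabled and a four-core mask, matching the NOTICE lines above), and the harness waits until the app answers on its RPC socket. A minimal sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket, the rpc_get_methods RPC, and up to 100 retries as in the trace (this is not the exact autotest_common.sh implementation), would be:

    #!/usr/bin/env bash
    # Launch the SPDK NVMe-oF target inside the test namespace and wait for its
    # RPC socket to come up, roughly what nvmfappstart/waitforlisten do above.
    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do      # up to 100 retries, as in the trace
        # rpc_get_methods succeeds once the app is listening on /var/tmp/spdk.sock
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
            exit 0
        fi
        sleep 0.1
    done
    echo "nvmf_tgt did not come up" >&2
    exit 1

Once the socket answers, the subsequent rpc.py calls in the trace (nvmf_create_transport, bdev_malloc_create, bdev_raid_create, nvmf_create_subsystem and friends) are all issued against this same target instance.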
00:11:18.281 [2024-07-21 16:26:36.427412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.281 [2024-07-21 16:26:36.427497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.281 [2024-07-21 16:26:36.427733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.281 [2024-07-21 16:26:36.427581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.217 16:26:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:19.217 16:26:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:19.217 16:26:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.217 16:26:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:19.217 16:26:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.217 16:26:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.217 16:26:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:19.539 [2024-07-21 16:26:37.511953] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.539 16:26:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:19.818 16:26:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:19.818 16:26:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.077 16:26:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:20.077 16:26:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.336 16:26:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:20.336 16:26:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.593 16:26:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:20.593 16:26:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:20.850 16:26:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.107 16:26:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:21.107 16:26:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.365 16:26:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:21.365 16:26:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:21.634 16:26:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:21.634 16:26:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:21.893 16:26:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:11:22.151 16:26:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:22.151 16:26:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:22.408 16:26:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:22.408 16:26:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:22.665 16:26:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.923 [2024-07-21 16:26:41.036576] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.923 16:26:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:23.182 16:26:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:23.440 16:26:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.697 16:26:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:23.697 16:26:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:23.697 16:26:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.697 16:26:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:23.697 16:26:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:23.697 16:26:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:25.591 16:26:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:25.591 16:26:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:25.591 16:26:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.591 16:26:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:25.591 16:26:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.591 16:26:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:25.592 16:26:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:25.592 [global] 00:11:25.592 thread=1 00:11:25.592 invalidate=1 00:11:25.592 rw=write 00:11:25.592 time_based=1 00:11:25.592 runtime=1 00:11:25.592 ioengine=libaio 00:11:25.592 direct=1 00:11:25.592 bs=4096 00:11:25.592 iodepth=1 00:11:25.592 norandommap=0 00:11:25.592 numjobs=1 00:11:25.592 00:11:25.592 verify_dump=1 00:11:25.592 verify_backlog=512 00:11:25.592 verify_state_save=0 00:11:25.592 do_verify=1 00:11:25.592 verify=crc32c-intel 00:11:25.592 [job0] 00:11:25.592 filename=/dev/nvme0n1 00:11:25.592 [job1] 00:11:25.592 filename=/dev/nvme0n2 00:11:25.592 [job2] 
00:11:25.592 filename=/dev/nvme0n3 00:11:25.592 [job3] 00:11:25.592 filename=/dev/nvme0n4 00:11:25.849 Could not set queue depth (nvme0n1) 00:11:25.849 Could not set queue depth (nvme0n2) 00:11:25.849 Could not set queue depth (nvme0n3) 00:11:25.849 Could not set queue depth (nvme0n4) 00:11:25.849 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.849 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.849 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.849 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.849 fio-3.35 00:11:25.849 Starting 4 threads 00:11:27.239 00:11:27.239 job0: (groupid=0, jobs=1): err= 0: pid=77358: Sun Jul 21 16:26:45 2024 00:11:27.239 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:11:27.239 slat (nsec): min=15216, max=48160, avg=18060.58, stdev=3370.40 00:11:27.239 clat (usec): min=176, max=578, avg=221.26, stdev=19.67 00:11:27.239 lat (usec): min=192, max=599, avg=239.32, stdev=19.91 00:11:27.239 clat percentiles (usec): 00:11:27.239 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:11:27.239 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 223], 00:11:27.239 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 255], 00:11:27.239 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 306], 00:11:27.239 | 99.99th=[ 578] 00:11:27.239 write: IOPS=2526, BW=9.87MiB/s (10.3MB/s)(9.88MiB/1001msec); 0 zone resets 00:11:27.239 slat (usec): min=21, max=379, avg=25.66, stdev= 8.99 00:11:27.239 clat (usec): min=117, max=591, avg=172.77, stdev=20.71 00:11:27.239 lat (usec): min=140, max=629, avg=198.43, stdev=23.12 00:11:27.239 clat percentiles (usec): 00:11:27.239 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:11:27.239 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:11:27.239 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 202], 00:11:27.239 | 99.00th=[ 221], 99.50th=[ 239], 99.90th=[ 474], 99.95th=[ 515], 00:11:27.239 | 99.99th=[ 594] 00:11:27.239 bw ( KiB/s): min= 9816, max= 9816, per=30.10%, avg=9816.00, stdev= 0.00, samples=1 00:11:27.239 iops : min= 2454, max= 2454, avg=2454.00, stdev= 0.00, samples=1 00:11:27.239 lat (usec) : 250=96.88%, 500=3.06%, 750=0.07% 00:11:27.239 cpu : usr=1.50%, sys=7.60%, ctx=4578, majf=0, minf=11 00:11:27.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.239 issued rwts: total=2048,2529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.239 job1: (groupid=0, jobs=1): err= 0: pid=77359: Sun Jul 21 16:26:45 2024 00:11:27.239 read: IOPS=1309, BW=5239KiB/s (5364kB/s)(5244KiB/1001msec) 00:11:27.239 slat (nsec): min=16244, max=51309, avg=20744.11, stdev=3432.77 00:11:27.239 clat (usec): min=182, max=1988, avg=378.49, stdev=63.71 00:11:27.239 lat (usec): min=201, max=2017, avg=399.23, stdev=64.03 00:11:27.239 clat percentiles (usec): 00:11:27.239 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 351], 00:11:27.239 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 375], 00:11:27.239 | 70.00th=[ 388], 80.00th=[ 
400], 90.00th=[ 420], 95.00th=[ 437], 00:11:27.239 | 99.00th=[ 529], 99.50th=[ 578], 99.90th=[ 1020], 99.95th=[ 1991], 00:11:27.239 | 99.99th=[ 1991] 00:11:27.239 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:27.239 slat (nsec): min=26248, max=90890, avg=36631.16, stdev=6900.18 00:11:27.239 clat (usec): min=129, max=969, avg=269.31, stdev=57.11 00:11:27.239 lat (usec): min=160, max=1000, avg=305.95, stdev=57.50 00:11:27.239 clat percentiles (usec): 00:11:27.239 | 1.00th=[ 165], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 227], 00:11:27.239 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 258], 60.00th=[ 269], 00:11:27.239 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 351], 95.00th=[ 383], 00:11:27.239 | 99.00th=[ 420], 99.50th=[ 437], 99.90th=[ 537], 99.95th=[ 971], 00:11:27.239 | 99.99th=[ 971] 00:11:27.239 bw ( KiB/s): min= 7672, max= 7672, per=23.53%, avg=7672.00, stdev= 0.00, samples=1 00:11:27.239 iops : min= 1918, max= 1918, avg=1918.00, stdev= 0.00, samples=1 00:11:27.239 lat (usec) : 250=23.57%, 500=75.69%, 750=0.53%, 1000=0.14% 00:11:27.239 lat (msec) : 2=0.07% 00:11:27.239 cpu : usr=1.40%, sys=6.40%, ctx=2847, majf=0, minf=9 00:11:27.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.239 issued rwts: total=1311,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.239 job2: (groupid=0, jobs=1): err= 0: pid=77360: Sun Jul 21 16:26:45 2024 00:11:27.239 read: IOPS=2057, BW=8232KiB/s (8429kB/s)(8240KiB/1001msec) 00:11:27.239 slat (nsec): min=14324, max=49249, avg=17573.26, stdev=3160.23 00:11:27.239 clat (usec): min=165, max=541, avg=219.04, stdev=26.18 00:11:27.239 lat (usec): min=181, max=557, avg=236.61, stdev=26.15 00:11:27.239 clat percentiles (usec): 00:11:27.239 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 196], 00:11:27.239 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:11:27.239 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:11:27.239 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 326], 99.95th=[ 363], 00:11:27.239 | 99.99th=[ 545] 00:11:27.239 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:27.239 slat (nsec): min=20616, max=89648, avg=25344.45, stdev=4916.96 00:11:27.239 clat (usec): min=116, max=271, avg=171.55, stdev=21.27 00:11:27.239 lat (usec): min=139, max=321, avg=196.89, stdev=21.75 00:11:27.239 clat percentiles (usec): 00:11:27.239 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:11:27.239 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:11:27.239 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 210], 00:11:27.239 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 269], 99.95th=[ 273], 00:11:27.239 | 99.99th=[ 273] 00:11:27.239 bw ( KiB/s): min=10088, max=10088, per=30.93%, avg=10088.00, stdev= 0.00, samples=1 00:11:27.239 iops : min= 2522, max= 2522, avg=2522.00, stdev= 0.00, samples=1 00:11:27.239 lat (usec) : 250=95.00%, 500=4.98%, 750=0.02% 00:11:27.239 cpu : usr=2.10%, sys=7.10%, ctx=4625, majf=0, minf=6 00:11:27.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:11:27.239 issued rwts: total=2060,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.239 job3: (groupid=0, jobs=1): err= 0: pid=77361: Sun Jul 21 16:26:45 2024 00:11:27.239 read: IOPS=1301, BW=5207KiB/s (5332kB/s)(5212KiB/1001msec) 00:11:27.239 slat (nsec): min=18424, max=73478, avg=30842.34, stdev=7458.28 00:11:27.239 clat (usec): min=210, max=2266, avg=366.11, stdev=70.19 00:11:27.239 lat (usec): min=230, max=2290, avg=396.96, stdev=69.36 00:11:27.239 clat percentiles (usec): 00:11:27.239 | 1.00th=[ 293], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 334], 00:11:27.239 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 367], 00:11:27.239 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 429], 00:11:27.239 | 99.00th=[ 490], 99.50th=[ 611], 99.90th=[ 1004], 99.95th=[ 2278], 00:11:27.239 | 99.99th=[ 2278] 00:11:27.239 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:27.239 slat (usec): min=26, max=136, avg=38.67, stdev= 7.71 00:11:27.239 clat (usec): min=140, max=1948, avg=269.43, stdev=68.17 00:11:27.239 lat (usec): min=170, max=1998, avg=308.10, stdev=68.58 00:11:27.239 clat percentiles (usec): 00:11:27.239 | 1.00th=[ 182], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 229], 00:11:27.239 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 269], 00:11:27.239 | 70.00th=[ 281], 80.00th=[ 302], 90.00th=[ 338], 95.00th=[ 375], 00:11:27.239 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 1037], 99.95th=[ 1942], 00:11:27.239 | 99.99th=[ 1942] 00:11:27.239 bw ( KiB/s): min= 7608, max= 7608, per=23.33%, avg=7608.00, stdev= 0.00, samples=1 00:11:27.239 iops : min= 1902, max= 1902, avg=1902.00, stdev= 0.00, samples=1 00:11:27.239 lat (usec) : 250=22.75%, 500=76.72%, 750=0.35%, 1000=0.04% 00:11:27.239 lat (msec) : 2=0.11%, 4=0.04% 00:11:27.239 cpu : usr=1.80%, sys=7.80%, ctx=2840, majf=0, minf=9 00:11:27.239 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.239 issued rwts: total=1303,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.239 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.239 00:11:27.240 Run status group 0 (all jobs): 00:11:27.240 READ: bw=26.2MiB/s (27.5MB/s), 5207KiB/s-8232KiB/s (5332kB/s-8429kB/s), io=26.3MiB (27.5MB), run=1001-1001msec 00:11:27.240 WRITE: bw=31.8MiB/s (33.4MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.9MiB (33.4MB), run=1001-1001msec 00:11:27.240 00:11:27.240 Disk stats (read/write): 00:11:27.240 nvme0n1: ios=1863/2048, merge=0/0, ticks=433/384, in_queue=817, util=86.67% 00:11:27.240 nvme0n2: ios=1054/1408, merge=0/0, ticks=423/403, in_queue=826, util=87.30% 00:11:27.240 nvme0n3: ios=1848/2048, merge=0/0, ticks=420/382, in_queue=802, util=89.13% 00:11:27.240 nvme0n4: ios=1024/1398, merge=0/0, ticks=391/414, in_queue=805, util=89.59% 00:11:27.240 16:26:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:27.240 [global] 00:11:27.240 thread=1 00:11:27.240 invalidate=1 00:11:27.240 rw=randwrite 00:11:27.240 time_based=1 00:11:27.240 runtime=1 00:11:27.240 ioengine=libaio 00:11:27.240 direct=1 00:11:27.240 bs=4096 00:11:27.240 iodepth=1 00:11:27.240 norandommap=0 00:11:27.240 numjobs=1 00:11:27.240 00:11:27.240 
verify_dump=1 00:11:27.240 verify_backlog=512 00:11:27.240 verify_state_save=0 00:11:27.240 do_verify=1 00:11:27.240 verify=crc32c-intel 00:11:27.240 [job0] 00:11:27.240 filename=/dev/nvme0n1 00:11:27.240 [job1] 00:11:27.240 filename=/dev/nvme0n2 00:11:27.240 [job2] 00:11:27.240 filename=/dev/nvme0n3 00:11:27.240 [job3] 00:11:27.240 filename=/dev/nvme0n4 00:11:27.240 Could not set queue depth (nvme0n1) 00:11:27.240 Could not set queue depth (nvme0n2) 00:11:27.240 Could not set queue depth (nvme0n3) 00:11:27.240 Could not set queue depth (nvme0n4) 00:11:27.240 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.240 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.240 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.240 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:27.240 fio-3.35 00:11:27.240 Starting 4 threads 00:11:28.652 00:11:28.652 job0: (groupid=0, jobs=1): err= 0: pid=77420: Sun Jul 21 16:26:46 2024 00:11:28.652 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:28.652 slat (nsec): min=11239, max=70742, avg=18071.39, stdev=6712.98 00:11:28.652 clat (usec): min=170, max=1053, avg=315.27, stdev=55.70 00:11:28.652 lat (usec): min=182, max=1068, avg=333.34, stdev=58.59 00:11:28.652 clat percentiles (usec): 00:11:28.652 | 1.00th=[ 223], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 265], 00:11:28.652 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 334], 00:11:28.652 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 396], 00:11:28.652 | 99.00th=[ 441], 99.50th=[ 506], 99.90th=[ 570], 99.95th=[ 1057], 00:11:28.652 | 99.99th=[ 1057] 00:11:28.652 write: IOPS=1717, BW=6869KiB/s (7034kB/s)(6876KiB/1001msec); 0 zone resets 00:11:28.652 slat (nsec): min=11374, max=85334, avg=28902.69, stdev=11659.24 00:11:28.652 clat (usec): min=113, max=8056, avg=250.60, stdev=271.38 00:11:28.652 lat (usec): min=133, max=8085, avg=279.50, stdev=272.78 00:11:28.652 clat percentiles (usec): 00:11:28.652 | 1.00th=[ 126], 5.00th=[ 141], 10.00th=[ 169], 20.00th=[ 202], 00:11:28.652 | 30.00th=[ 221], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 258], 00:11:28.652 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 297], 00:11:28.652 | 99.00th=[ 367], 99.50th=[ 429], 99.90th=[ 6849], 99.95th=[ 8029], 00:11:28.652 | 99.99th=[ 8029] 00:11:28.652 bw ( KiB/s): min= 7504, max= 7504, per=23.89%, avg=7504.00, stdev= 0.00, samples=1 00:11:28.652 iops : min= 1876, max= 1876, avg=1876.00, stdev= 0.00, samples=1 00:11:28.652 lat (usec) : 250=31.52%, 500=68.05%, 750=0.25% 00:11:28.652 lat (msec) : 2=0.06%, 4=0.06%, 10=0.06% 00:11:28.652 cpu : usr=1.30%, sys=6.20%, ctx=3255, majf=0, minf=9 00:11:28.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.652 issued rwts: total=1536,1719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.652 job1: (groupid=0, jobs=1): err= 0: pid=77421: Sun Jul 21 16:26:46 2024 00:11:28.652 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:28.652 slat (nsec): min=11445, max=50066, avg=16109.69, stdev=4313.38 00:11:28.652 clat 
(usec): min=197, max=509, avg=312.65, stdev=46.51 00:11:28.652 lat (usec): min=215, max=528, avg=328.76, stdev=45.39 00:11:28.652 clat percentiles (usec): 00:11:28.652 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 269], 00:11:28.652 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 310], 00:11:28.652 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 392], 00:11:28.652 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 502], 99.95th=[ 510], 00:11:28.652 | 99.99th=[ 510] 00:11:28.652 write: IOPS=1788, BW=7153KiB/s (7325kB/s)(7160KiB/1001msec); 0 zone resets 00:11:28.652 slat (usec): min=15, max=111, avg=25.00, stdev= 6.74 00:11:28.652 clat (usec): min=120, max=724, avg=247.75, stdev=41.54 00:11:28.652 lat (usec): min=155, max=752, avg=272.75, stdev=40.01 00:11:28.652 clat percentiles (usec): 00:11:28.652 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:11:28.652 | 30.00th=[ 217], 40.00th=[ 229], 50.00th=[ 245], 60.00th=[ 265], 00:11:28.652 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 00:11:28.652 | 99.00th=[ 334], 99.50th=[ 355], 99.90th=[ 457], 99.95th=[ 725], 00:11:28.652 | 99.99th=[ 725] 00:11:28.652 bw ( KiB/s): min= 8192, max= 8192, per=26.09%, avg=8192.00, stdev= 0.00, samples=1 00:11:28.652 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:28.652 lat (usec) : 250=29.92%, 500=69.96%, 750=0.12% 00:11:28.652 cpu : usr=2.20%, sys=4.80%, ctx=3326, majf=0, minf=11 00:11:28.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.652 issued rwts: total=1536,1790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.652 job2: (groupid=0, jobs=1): err= 0: pid=77422: Sun Jul 21 16:26:46 2024 00:11:28.652 read: IOPS=2055, BW=8224KiB/s (8421kB/s)(8232KiB/1001msec) 00:11:28.652 slat (nsec): min=11122, max=57172, avg=15978.99, stdev=4795.92 00:11:28.652 clat (usec): min=147, max=616, avg=230.87, stdev=64.55 00:11:28.652 lat (usec): min=162, max=632, avg=246.85, stdev=64.18 00:11:28.652 clat percentiles (usec): 00:11:28.652 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:11:28.652 | 30.00th=[ 178], 40.00th=[ 188], 50.00th=[ 202], 60.00th=[ 258], 00:11:28.652 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 314], 95.00th=[ 347], 00:11:28.652 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 519], 99.95th=[ 545], 00:11:28.652 | 99.99th=[ 619] 00:11:28.652 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:28.652 slat (nsec): min=11164, max=85983, avg=22983.38, stdev=7073.35 00:11:28.652 clat (usec): min=107, max=775, avg=166.02, stdev=50.43 00:11:28.652 lat (usec): min=127, max=792, avg=189.00, stdev=49.80 00:11:28.652 clat percentiles (usec): 00:11:28.652 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 128], 00:11:28.652 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 145], 60.00th=[ 155], 00:11:28.652 | 70.00th=[ 180], 80.00th=[ 212], 90.00th=[ 245], 95.00th=[ 265], 00:11:28.652 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 371], 99.95th=[ 441], 00:11:28.652 | 99.99th=[ 775] 00:11:28.652 bw ( KiB/s): min=12288, max=12288, per=39.13%, avg=12288.00, stdev= 0.00, samples=1 00:11:28.652 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:28.652 lat (usec) : 250=76.33%, 500=23.58%, 750=0.06%, 1000=0.02% 
00:11:28.652 cpu : usr=1.70%, sys=6.90%, ctx=4620, majf=0, minf=16 00:11:28.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.652 issued rwts: total=2058,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.652 job3: (groupid=0, jobs=1): err= 0: pid=77423: Sun Jul 21 16:26:46 2024 00:11:28.652 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:28.652 slat (nsec): min=14298, max=60939, avg=19412.58, stdev=5112.09 00:11:28.652 clat (usec): min=210, max=492, avg=309.25, stdev=42.48 00:11:28.652 lat (usec): min=227, max=535, avg=328.66, stdev=44.86 00:11:28.652 clat percentiles (usec): 00:11:28.652 | 1.00th=[ 245], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 273], 00:11:28.652 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 310], 00:11:28.652 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 383], 00:11:28.652 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 482], 99.95th=[ 494], 00:11:28.652 | 99.99th=[ 494] 00:11:28.652 write: IOPS=1788, BW=7153KiB/s (7325kB/s)(7160KiB/1001msec); 0 zone resets 00:11:28.652 slat (nsec): min=13282, max=79839, avg=28830.45, stdev=6581.85 00:11:28.652 clat (usec): min=124, max=642, avg=243.61, stdev=37.56 00:11:28.652 lat (usec): min=150, max=669, avg=272.44, stdev=39.08 00:11:28.652 clat percentiles (usec): 00:11:28.652 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:11:28.652 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 241], 60.00th=[ 255], 00:11:28.652 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:11:28.652 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 457], 99.95th=[ 644], 00:11:28.652 | 99.99th=[ 644] 00:11:28.652 bw ( KiB/s): min= 8192, max= 8192, per=26.09%, avg=8192.00, stdev= 0.00, samples=1 00:11:28.652 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:28.652 lat (usec) : 250=31.42%, 500=68.55%, 750=0.03% 00:11:28.652 cpu : usr=1.90%, sys=6.20%, ctx=3326, majf=0, minf=9 00:11:28.652 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.652 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.652 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.652 issued rwts: total=1536,1790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.652 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.652 00:11:28.652 Run status group 0 (all jobs): 00:11:28.652 READ: bw=26.0MiB/s (27.3MB/s), 6138KiB/s-8224KiB/s (6285kB/s-8421kB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:11:28.652 WRITE: bw=30.7MiB/s (32.2MB/s), 6869KiB/s-9.99MiB/s (7034kB/s-10.5MB/s), io=30.7MiB (32.2MB), run=1001-1001msec 00:11:28.652 00:11:28.652 Disk stats (read/write): 00:11:28.652 nvme0n1: ios=1300/1536, merge=0/0, ticks=449/387, in_queue=836, util=88.37% 00:11:28.652 nvme0n2: ios=1321/1536, merge=0/0, ticks=405/368, in_queue=773, util=88.61% 00:11:28.652 nvme0n3: ios=2016/2048, merge=0/0, ticks=475/328, in_queue=803, util=89.48% 00:11:28.652 nvme0n4: ios=1300/1536, merge=0/0, ticks=415/395, in_queue=810, util=89.74% 00:11:28.652 16:26:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:28.652 [global] 00:11:28.652 thread=1 00:11:28.652 invalidate=1 
00:11:28.652 rw=write 00:11:28.653 time_based=1 00:11:28.653 runtime=1 00:11:28.653 ioengine=libaio 00:11:28.653 direct=1 00:11:28.653 bs=4096 00:11:28.653 iodepth=128 00:11:28.653 norandommap=0 00:11:28.653 numjobs=1 00:11:28.653 00:11:28.653 verify_dump=1 00:11:28.653 verify_backlog=512 00:11:28.653 verify_state_save=0 00:11:28.653 do_verify=1 00:11:28.653 verify=crc32c-intel 00:11:28.653 [job0] 00:11:28.653 filename=/dev/nvme0n1 00:11:28.653 [job1] 00:11:28.653 filename=/dev/nvme0n2 00:11:28.653 [job2] 00:11:28.653 filename=/dev/nvme0n3 00:11:28.653 [job3] 00:11:28.653 filename=/dev/nvme0n4 00:11:28.653 Could not set queue depth (nvme0n1) 00:11:28.653 Could not set queue depth (nvme0n2) 00:11:28.653 Could not set queue depth (nvme0n3) 00:11:28.653 Could not set queue depth (nvme0n4) 00:11:28.653 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.653 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.653 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.653 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.653 fio-3.35 00:11:28.653 Starting 4 threads 00:11:30.026 00:11:30.026 job0: (groupid=0, jobs=1): err= 0: pid=77477: Sun Jul 21 16:26:47 2024 00:11:30.026 read: IOPS=4082, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1001msec) 00:11:30.026 slat (usec): min=7, max=5743, avg=120.54, stdev=579.17 00:11:30.026 clat (usec): min=817, max=19674, avg=15870.79, stdev=1802.66 00:11:30.026 lat (usec): min=830, max=21103, avg=15991.33, stdev=1722.46 00:11:30.026 clat percentiles (usec): 00:11:30.026 | 1.00th=[ 8160], 5.00th=[12911], 10.00th=[14877], 20.00th=[15401], 00:11:30.026 | 30.00th=[15664], 40.00th=[15926], 50.00th=[16188], 60.00th=[16450], 00:11:30.026 | 70.00th=[16581], 80.00th=[16712], 90.00th=[17171], 95.00th=[17433], 00:11:30.026 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19792], 99.95th=[19792], 00:11:30.026 | 99.99th=[19792] 00:11:30.026 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:11:30.026 slat (usec): min=11, max=4712, avg=115.34, stdev=520.86 00:11:30.026 clat (usec): min=10694, max=18944, avg=15062.17, stdev=1768.46 00:11:30.026 lat (usec): min=10722, max=19001, avg=15177.51, stdev=1757.31 00:11:30.026 clat percentiles (usec): 00:11:30.026 | 1.00th=[11338], 5.00th=[12256], 10.00th=[12780], 20.00th=[13173], 00:11:30.026 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15270], 60.00th=[16057], 00:11:30.026 | 70.00th=[16450], 80.00th=[16712], 90.00th=[17171], 95.00th=[17433], 00:11:30.026 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:11:30.026 | 99.99th=[19006] 00:11:30.026 bw ( KiB/s): min=16384, max=16384, per=35.22%, avg=16384.00, stdev= 0.00, samples=1 00:11:30.026 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:11:30.026 lat (usec) : 1000=0.06% 00:11:30.026 lat (msec) : 4=0.05%, 10=0.73%, 20=99.16% 00:11:30.026 cpu : usr=3.80%, sys=12.20%, ctx=372, majf=0, minf=15 00:11:30.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:30.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:30.026 issued rwts: total=4087,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.026 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:11:30.026 job1: (groupid=0, jobs=1): err= 0: pid=77478: Sun Jul 21 16:26:47 2024 00:11:30.026 read: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec) 00:11:30.026 slat (usec): min=8, max=20284, avg=282.03, stdev=1675.09 00:11:30.026 clat (usec): min=23352, max=53423, avg=35340.86, stdev=5604.86 00:11:30.026 lat (usec): min=23375, max=54981, avg=35622.89, stdev=5781.80 00:11:30.026 clat percentiles (usec): 00:11:30.026 | 1.00th=[23462], 5.00th=[24511], 10.00th=[27657], 20.00th=[32113], 00:11:30.026 | 30.00th=[32900], 40.00th=[33817], 50.00th=[34866], 60.00th=[35914], 00:11:30.026 | 70.00th=[38536], 80.00th=[40109], 90.00th=[41681], 95.00th=[44303], 00:11:30.027 | 99.00th=[47973], 99.50th=[48497], 99.90th=[52691], 99.95th=[53216], 00:11:30.027 | 99.99th=[53216] 00:11:30.027 write: IOPS=1847, BW=7389KiB/s (7566kB/s)(7448KiB/1008msec); 0 zone resets 00:11:30.027 slat (usec): min=10, max=26671, avg=298.60, stdev=1681.13 00:11:30.027 clat (usec): min=3685, max=92353, avg=38057.59, stdev=17215.37 00:11:30.027 lat (usec): min=9041, max=92412, avg=38356.20, stdev=17290.96 00:11:30.027 clat percentiles (usec): 00:11:30.027 | 1.00th=[11731], 5.00th=[21627], 10.00th=[25560], 20.00th=[27657], 00:11:30.027 | 30.00th=[29754], 40.00th=[31589], 50.00th=[32900], 60.00th=[33817], 00:11:30.027 | 70.00th=[37487], 80.00th=[42206], 90.00th=[68682], 95.00th=[81265], 00:11:30.027 | 99.00th=[91751], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:11:30.027 | 99.99th=[92799] 00:11:30.027 bw ( KiB/s): min= 6472, max= 7414, per=14.92%, avg=6943.00, stdev=666.09, samples=2 00:11:30.027 iops : min= 1618, max= 1853, avg=1735.50, stdev=166.17, samples=2 00:11:30.027 lat (msec) : 4=0.03%, 10=0.24%, 20=1.97%, 50=89.99%, 100=7.77% 00:11:30.027 cpu : usr=1.69%, sys=5.46%, ctx=381, majf=0, minf=13 00:11:30.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:11:30.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:30.027 issued rwts: total=1536,1862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:30.027 job2: (groupid=0, jobs=1): err= 0: pid=77479: Sun Jul 21 16:26:47 2024 00:11:30.027 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:11:30.027 slat (usec): min=6, max=5809, avg=137.83, stdev=732.42 00:11:30.027 clat (usec): min=12276, max=24634, avg=17951.03, stdev=1402.84 00:11:30.027 lat (usec): min=12301, max=25002, avg=18088.86, stdev=1513.69 00:11:30.027 clat percentiles (usec): 00:11:30.027 | 1.00th=[13698], 5.00th=[15795], 10.00th=[16450], 20.00th=[17171], 00:11:30.027 | 30.00th=[17433], 40.00th=[17957], 50.00th=[17957], 60.00th=[18220], 00:11:30.027 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19268], 95.00th=[19792], 00:11:30.027 | 99.00th=[22938], 99.50th=[23462], 99.90th=[24249], 99.95th=[24511], 00:11:30.027 | 99.99th=[24511] 00:11:30.027 write: IOPS=3702, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1004msec); 0 zone resets 00:11:30.027 slat (usec): min=13, max=6604, avg=128.27, stdev=632.44 00:11:30.027 clat (usec): min=477, max=24693, avg=16811.38, stdev=2436.45 00:11:30.027 lat (usec): min=5091, max=24739, avg=16939.65, stdev=2415.51 00:11:30.027 clat percentiles (usec): 00:11:30.027 | 1.00th=[ 6128], 5.00th=[12256], 10.00th=[13304], 20.00th=[15926], 00:11:30.027 | 30.00th=[16450], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:11:30.027 | 70.00th=[17957], 80.00th=[18482], 
90.00th=[19268], 95.00th=[19792], 00:11:30.027 | 99.00th=[21627], 99.50th=[22676], 99.90th=[23462], 99.95th=[24511], 00:11:30.027 | 99.99th=[24773] 00:11:30.027 bw ( KiB/s): min=12504, max=16248, per=30.90%, avg=14376.00, stdev=2647.41, samples=2 00:11:30.027 iops : min= 3126, max= 4062, avg=3594.00, stdev=661.85, samples=2 00:11:30.027 lat (usec) : 500=0.01% 00:11:30.027 lat (msec) : 10=0.58%, 20=94.97%, 50=4.44% 00:11:30.027 cpu : usr=3.79%, sys=11.37%, ctx=321, majf=0, minf=7 00:11:30.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:30.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:30.027 issued rwts: total=3584,3717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:30.027 job3: (groupid=0, jobs=1): err= 0: pid=77480: Sun Jul 21 16:26:47 2024 00:11:30.027 read: IOPS=1889, BW=7560KiB/s (7741kB/s)(7620KiB/1008msec) 00:11:30.027 slat (usec): min=7, max=20055, avg=264.70, stdev=1673.76 00:11:30.027 clat (usec): min=1295, max=54068, avg=32853.81, stdev=5758.91 00:11:30.027 lat (usec): min=17025, max=54117, avg=33118.51, stdev=5877.74 00:11:30.027 clat percentiles (usec): 00:11:30.027 | 1.00th=[17171], 5.00th=[25822], 10.00th=[26870], 20.00th=[28443], 00:11:30.027 | 30.00th=[29492], 40.00th=[30540], 50.00th=[33162], 60.00th=[34341], 00:11:30.027 | 70.00th=[36963], 80.00th=[38011], 90.00th=[40109], 95.00th=[41681], 00:11:30.027 | 99.00th=[44827], 99.50th=[49021], 99.90th=[52167], 99.95th=[54264], 00:11:30.027 | 99.99th=[54264] 00:11:30.027 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:11:30.027 slat (usec): min=10, max=14686, avg=236.89, stdev=1167.96 00:11:30.027 clat (usec): min=19770, max=52670, avg=31563.21, stdev=4929.28 00:11:30.027 lat (usec): min=19802, max=52704, avg=31800.10, stdev=4972.82 00:11:30.027 clat percentiles (usec): 00:11:30.027 | 1.00th=[21365], 5.00th=[23462], 10.00th=[24773], 20.00th=[27395], 00:11:30.027 | 30.00th=[28967], 40.00th=[30802], 50.00th=[31851], 60.00th=[32637], 00:11:30.027 | 70.00th=[33817], 80.00th=[35390], 90.00th=[38011], 95.00th=[39584], 00:11:30.027 | 99.00th=[44827], 99.50th=[46400], 99.90th=[51643], 99.95th=[51643], 00:11:30.027 | 99.99th=[52691] 00:11:30.027 bw ( KiB/s): min= 8192, max= 8208, per=17.63%, avg=8200.00, stdev=11.31, samples=2 00:11:30.027 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:11:30.027 lat (msec) : 2=0.03%, 20=1.49%, 50=98.18%, 100=0.30% 00:11:30.027 cpu : usr=1.49%, sys=7.15%, ctx=381, majf=0, minf=15 00:11:30.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:30.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:30.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:30.027 issued rwts: total=1905,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:30.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:30.027 00:11:30.027 Run status group 0 (all jobs): 00:11:30.027 READ: bw=43.1MiB/s (45.2MB/s), 6095KiB/s-15.9MiB/s (6242kB/s-16.7MB/s), io=43.4MiB (45.5MB), run=1001-1008msec 00:11:30.027 WRITE: bw=45.4MiB/s (47.6MB/s), 7389KiB/s-16.0MiB/s (7566kB/s-16.8MB/s), io=45.8MiB (48.0MB), run=1001-1008msec 00:11:30.027 00:11:30.027 Disk stats (read/write): 00:11:30.027 nvme0n1: ios=3520/3584, merge=0/0, ticks=12908/11738, in_queue=24646, util=89.67% 00:11:30.027 
nvme0n2: ios=1270/1536, merge=0/0, ticks=20980/23031, in_queue=44011, util=88.36% 00:11:30.027 nvme0n3: ios=3089/3201, merge=0/0, ticks=17085/15775, in_queue=32860, util=89.61% 00:11:30.027 nvme0n4: ios=1553/1858, merge=0/0, ticks=25076/26436, in_queue=51512, util=89.96% 00:11:30.027 16:26:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:30.027 [global] 00:11:30.027 thread=1 00:11:30.027 invalidate=1 00:11:30.027 rw=randwrite 00:11:30.027 time_based=1 00:11:30.027 runtime=1 00:11:30.027 ioengine=libaio 00:11:30.027 direct=1 00:11:30.027 bs=4096 00:11:30.027 iodepth=128 00:11:30.027 norandommap=0 00:11:30.027 numjobs=1 00:11:30.027 00:11:30.027 verify_dump=1 00:11:30.027 verify_backlog=512 00:11:30.027 verify_state_save=0 00:11:30.027 do_verify=1 00:11:30.027 verify=crc32c-intel 00:11:30.027 [job0] 00:11:30.027 filename=/dev/nvme0n1 00:11:30.027 [job1] 00:11:30.027 filename=/dev/nvme0n2 00:11:30.027 [job2] 00:11:30.027 filename=/dev/nvme0n3 00:11:30.027 [job3] 00:11:30.027 filename=/dev/nvme0n4 00:11:30.027 Could not set queue depth (nvme0n1) 00:11:30.027 Could not set queue depth (nvme0n2) 00:11:30.027 Could not set queue depth (nvme0n3) 00:11:30.027 Could not set queue depth (nvme0n4) 00:11:30.027 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.027 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.027 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.027 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:30.027 fio-3.35 00:11:30.027 Starting 4 threads 00:11:31.401 00:11:31.401 job0: (groupid=0, jobs=1): err= 0: pid=77537: Sun Jul 21 16:26:49 2024 00:11:31.401 read: IOPS=3269, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1005msec) 00:11:31.401 slat (usec): min=2, max=10194, avg=149.92, stdev=697.52 00:11:31.401 clat (usec): min=620, max=37117, avg=18646.65, stdev=3388.31 00:11:31.401 lat (usec): min=7644, max=37130, avg=18796.57, stdev=3440.09 00:11:31.401 clat percentiles (usec): 00:11:31.401 | 1.00th=[ 8291], 5.00th=[13960], 10.00th=[16057], 20.00th=[17171], 00:11:31.401 | 30.00th=[17695], 40.00th=[18220], 50.00th=[18220], 60.00th=[18482], 00:11:31.401 | 70.00th=[18744], 80.00th=[19268], 90.00th=[22414], 95.00th=[25035], 00:11:31.401 | 99.00th=[33817], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:11:31.401 | 99.99th=[36963] 00:11:31.401 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:11:31.401 slat (usec): min=3, max=8459, avg=136.02, stdev=657.62 00:11:31.401 clat (usec): min=11188, max=30649, avg=18322.41, stdev=2421.00 00:11:31.401 lat (usec): min=11204, max=30660, avg=18458.43, stdev=2494.47 00:11:31.401 clat percentiles (usec): 00:11:31.401 | 1.00th=[13042], 5.00th=[14746], 10.00th=[15795], 20.00th=[16581], 00:11:31.401 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17957], 60.00th=[19006], 00:11:31.401 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20841], 95.00th=[22414], 00:11:31.401 | 99.00th=[25822], 99.50th=[29492], 99.90th=[30540], 99.95th=[30540], 00:11:31.401 | 99.99th=[30540] 00:11:31.401 bw ( KiB/s): min=13768, max=14904, per=25.34%, avg=14336.00, stdev=803.27, samples=2 00:11:31.401 iops : min= 3442, max= 3726, avg=3584.00, stdev=200.82, samples=2 00:11:31.401 lat (usec) : 
750=0.01% 00:11:31.401 lat (msec) : 10=0.61%, 20=81.54%, 50=17.83% 00:11:31.401 cpu : usr=3.59%, sys=7.67%, ctx=972, majf=0, minf=10 00:11:31.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:31.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.401 issued rwts: total=3286,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.401 job1: (groupid=0, jobs=1): err= 0: pid=77538: Sun Jul 21 16:26:49 2024 00:11:31.401 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:11:31.401 slat (usec): min=6, max=8107, avg=214.88, stdev=883.37 00:11:31.401 clat (usec): min=12686, max=54639, avg=27987.07, stdev=11106.65 00:11:31.401 lat (usec): min=14865, max=54655, avg=28201.95, stdev=11148.72 00:11:31.401 clat percentiles (usec): 00:11:31.401 | 1.00th=[14877], 5.00th=[16712], 10.00th=[18220], 20.00th=[19268], 00:11:31.401 | 30.00th=[19530], 40.00th=[20055], 50.00th=[21627], 60.00th=[26346], 00:11:31.401 | 70.00th=[33424], 80.00th=[40109], 90.00th=[46400], 95.00th=[47973], 00:11:31.401 | 99.00th=[53740], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:11:31.401 | 99.99th=[54789] 00:11:31.401 write: IOPS=2973, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1007msec); 0 zone resets 00:11:31.401 slat (usec): min=13, max=6981, avg=141.74, stdev=686.56 00:11:31.401 clat (usec): min=5881, max=32020, avg=18306.42, stdev=6226.59 00:11:31.401 lat (usec): min=7097, max=32038, avg=18448.15, stdev=6237.13 00:11:31.401 clat percentiles (usec): 00:11:31.401 | 1.00th=[10552], 5.00th=[12780], 10.00th=[13042], 20.00th=[13435], 00:11:31.401 | 30.00th=[13698], 40.00th=[13829], 50.00th=[14615], 60.00th=[17171], 00:11:31.401 | 70.00th=[22152], 80.00th=[26346], 90.00th=[28443], 95.00th=[29230], 00:11:31.401 | 99.00th=[30278], 99.50th=[31851], 99.90th=[31851], 99.95th=[32113], 00:11:31.401 | 99.99th=[32113] 00:11:31.401 bw ( KiB/s): min=10648, max=12288, per=20.27%, avg=11468.00, stdev=1159.66, samples=2 00:11:31.401 iops : min= 2662, max= 3072, avg=2867.00, stdev=289.91, samples=2 00:11:31.401 lat (msec) : 10=0.43%, 20=52.99%, 50=45.07%, 100=1.51% 00:11:31.401 cpu : usr=3.38%, sys=8.45%, ctx=272, majf=0, minf=13 00:11:31.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:31.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.401 issued rwts: total=2560,2994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.401 job2: (groupid=0, jobs=1): err= 0: pid=77539: Sun Jul 21 16:26:49 2024 00:11:31.401 read: IOPS=3213, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1008msec) 00:11:31.401 slat (usec): min=3, max=8474, avg=149.83, stdev=714.91 00:11:31.401 clat (usec): min=1012, max=35222, avg=18508.84, stdev=2888.52 00:11:31.401 lat (usec): min=7348, max=36212, avg=18658.67, stdev=2944.48 00:11:31.401 clat percentiles (usec): 00:11:31.401 | 1.00th=[ 9503], 5.00th=[14877], 10.00th=[16188], 20.00th=[17171], 00:11:31.401 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:11:31.401 | 70.00th=[18744], 80.00th=[19530], 90.00th=[21627], 95.00th=[23200], 00:11:31.401 | 99.00th=[31851], 99.50th=[32113], 99.90th=[33162], 99.95th=[33162], 00:11:31.401 | 99.99th=[35390] 00:11:31.401 write: IOPS=3555, 
BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:11:31.401 slat (usec): min=5, max=7758, avg=138.74, stdev=653.76 00:11:31.401 clat (usec): min=11093, max=32061, avg=18775.83, stdev=2561.29 00:11:31.401 lat (usec): min=11104, max=33862, avg=18914.57, stdev=2632.49 00:11:31.401 clat percentiles (usec): 00:11:31.401 | 1.00th=[13829], 5.00th=[15795], 10.00th=[16188], 20.00th=[16909], 00:11:31.401 | 30.00th=[17433], 40.00th=[17695], 50.00th=[18482], 60.00th=[19268], 00:11:31.401 | 70.00th=[19530], 80.00th=[20317], 90.00th=[21103], 95.00th=[23462], 00:11:31.401 | 99.00th=[29230], 99.50th=[30540], 99.90th=[31851], 99.95th=[32113], 00:11:31.401 | 99.99th=[32113] 00:11:31.401 bw ( KiB/s): min=13960, max=14741, per=25.36%, avg=14350.50, stdev=552.25, samples=2 00:11:31.401 iops : min= 3490, max= 3685, avg=3587.50, stdev=137.89, samples=2 00:11:31.401 lat (msec) : 2=0.01%, 10=0.62%, 20=79.14%, 50=20.23% 00:11:31.401 cpu : usr=3.28%, sys=8.14%, ctx=943, majf=0, minf=13 00:11:31.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:31.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.401 issued rwts: total=3239,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.401 job3: (groupid=0, jobs=1): err= 0: pid=77540: Sun Jul 21 16:26:49 2024 00:11:31.401 read: IOPS=3632, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1004msec) 00:11:31.401 slat (usec): min=8, max=8362, avg=113.11, stdev=531.74 00:11:31.401 clat (usec): min=581, max=24360, avg=13816.03, stdev=2420.40 00:11:31.401 lat (usec): min=5264, max=24377, avg=13929.15, stdev=2460.91 00:11:31.401 clat percentiles (usec): 00:11:31.401 | 1.00th=[ 8029], 5.00th=[10814], 10.00th=[11863], 20.00th=[12256], 00:11:31.401 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13304], 60.00th=[14091], 00:11:31.401 | 70.00th=[14484], 80.00th=[15008], 90.00th=[16909], 95.00th=[19006], 00:11:31.401 | 99.00th=[21365], 99.50th=[23200], 99.90th=[24249], 99.95th=[24249], 00:11:31.401 | 99.99th=[24249] 00:11:31.401 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:11:31.401 slat (usec): min=11, max=5628, avg=135.86, stdev=466.66 00:11:31.401 clat (usec): min=8448, max=29854, avg=18663.61, stdev=5405.02 00:11:31.401 lat (usec): min=8471, max=29879, avg=18799.47, stdev=5438.37 00:11:31.401 clat percentiles (usec): 00:11:31.401 | 1.00th=[ 9372], 5.00th=[10945], 10.00th=[11207], 20.00th=[12256], 00:11:31.401 | 30.00th=[14222], 40.00th=[18482], 50.00th=[19268], 60.00th=[20055], 00:11:31.401 | 70.00th=[21627], 80.00th=[23987], 90.00th=[26084], 95.00th=[27132], 00:11:31.401 | 99.00th=[27657], 99.50th=[28181], 99.90th=[29754], 99.95th=[29754], 00:11:31.401 | 99.99th=[29754] 00:11:31.401 bw ( KiB/s): min=15864, max=16384, per=28.50%, avg=16124.00, stdev=367.70, samples=2 00:11:31.401 iops : min= 3966, max= 4096, avg=4031.00, stdev=91.92, samples=2 00:11:31.401 lat (usec) : 750=0.01% 00:11:31.401 lat (msec) : 10=2.27%, 20=75.18%, 50=22.54% 00:11:31.401 cpu : usr=4.59%, sys=11.57%, ctx=593, majf=0, minf=11 00:11:31.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:31.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.401 issued rwts: total=3647,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.401 
latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.401 00:11:31.401 Run status group 0 (all jobs): 00:11:31.401 READ: bw=49.3MiB/s (51.7MB/s), 9.93MiB/s-14.2MiB/s (10.4MB/s-14.9MB/s), io=49.7MiB (52.1MB), run=1004-1008msec 00:11:31.401 WRITE: bw=55.3MiB/s (57.9MB/s), 11.6MiB/s-15.9MiB/s (12.2MB/s-16.7MB/s), io=55.7MiB (58.4MB), run=1004-1008msec 00:11:31.401 00:11:31.401 Disk stats (read/write): 00:11:31.401 nvme0n1: ios=2789/3072, merge=0/0, ticks=23851/24318, in_queue=48169, util=85.56% 00:11:31.401 nvme0n2: ios=2289/2560, merge=0/0, ticks=14089/10255, in_queue=24344, util=86.37% 00:11:31.401 nvme0n3: ios=2685/3072, merge=0/0, ticks=23467/24775, in_queue=48242, util=87.87% 00:11:31.401 nvme0n4: ios=3072/3124, merge=0/0, ticks=20735/29236, in_queue=49971, util=89.54% 00:11:31.401 16:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:31.401 16:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77560 00:11:31.401 16:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:31.401 16:26:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:31.401 [global] 00:11:31.401 thread=1 00:11:31.401 invalidate=1 00:11:31.401 rw=read 00:11:31.401 time_based=1 00:11:31.401 runtime=10 00:11:31.401 ioengine=libaio 00:11:31.401 direct=1 00:11:31.401 bs=4096 00:11:31.401 iodepth=1 00:11:31.401 norandommap=1 00:11:31.401 numjobs=1 00:11:31.401 00:11:31.401 [job0] 00:11:31.401 filename=/dev/nvme0n1 00:11:31.401 [job1] 00:11:31.402 filename=/dev/nvme0n2 00:11:31.402 [job2] 00:11:31.402 filename=/dev/nvme0n3 00:11:31.402 [job3] 00:11:31.402 filename=/dev/nvme0n4 00:11:31.402 Could not set queue depth (nvme0n1) 00:11:31.402 Could not set queue depth (nvme0n2) 00:11:31.402 Could not set queue depth (nvme0n3) 00:11:31.402 Could not set queue depth (nvme0n4) 00:11:31.402 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.402 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.402 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.402 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.402 fio-3.35 00:11:31.402 Starting 4 threads 00:11:34.683 16:26:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:34.683 fio: pid=77603, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:34.683 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=48967680, buflen=4096 00:11:34.683 16:26:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:34.940 fio: pid=77602, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:34.940 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=28590080, buflen=4096 00:11:34.940 16:26:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:34.940 16:26:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:35.196 fio: pid=77600, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:35.196 fio: io_u error on file /dev/nvme0n1: Remote I/O 
error: read offset=35442688, buflen=4096 00:11:35.196 16:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.197 16:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:35.454 fio: pid=77601, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:35.454 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=3244032, buflen=4096 00:11:35.454 00:11:35.454 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77600: Sun Jul 21 16:26:53 2024 00:11:35.454 read: IOPS=2487, BW=9949KiB/s (10.2MB/s)(33.8MiB/3479msec) 00:11:35.454 slat (usec): min=8, max=10492, avg=21.76, stdev=174.83 00:11:35.454 clat (usec): min=99, max=3788, avg=378.20, stdev=126.92 00:11:35.454 lat (usec): min=184, max=10841, avg=399.97, stdev=216.95 00:11:35.454 clat percentiles (usec): 00:11:35.454 | 1.00th=[ 192], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 247], 00:11:35.454 | 30.00th=[ 277], 40.00th=[ 383], 50.00th=[ 408], 60.00th=[ 424], 00:11:35.454 | 70.00th=[ 441], 80.00th=[ 465], 90.00th=[ 502], 95.00th=[ 529], 00:11:35.454 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 1270], 99.95th=[ 1975], 00:11:35.454 | 99.99th=[ 3785] 00:11:35.454 bw ( KiB/s): min= 8216, max=15248, per=21.02%, avg=10117.33, stdev=2766.59, samples=6 00:11:35.454 iops : min= 2054, max= 3812, avg=2529.33, stdev=691.65, samples=6 00:11:35.454 lat (usec) : 100=0.01%, 250=21.61%, 500=68.40%, 750=9.81%, 1000=0.02% 00:11:35.454 lat (msec) : 2=0.09%, 4=0.05% 00:11:35.454 cpu : usr=1.29%, sys=3.65%, ctx=8673, majf=0, minf=1 00:11:35.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.454 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.454 issued rwts: total=8654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.454 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77601: Sun Jul 21 16:26:53 2024 00:11:35.454 read: IOPS=4616, BW=18.0MiB/s (18.9MB/s)(67.1MiB/3721msec) 00:11:35.454 slat (usec): min=13, max=8992, avg=19.66, stdev=137.00 00:11:35.454 clat (usec): min=88, max=4589, avg=195.52, stdev=53.07 00:11:35.454 lat (usec): min=138, max=9157, avg=215.18, stdev=147.94 00:11:35.454 clat percentiles (usec): 00:11:35.454 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 167], 00:11:35.454 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 200], 00:11:35.454 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 249], 00:11:35.454 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 408], 99.95th=[ 807], 00:11:35.454 | 99.99th=[ 2409] 00:11:35.454 bw ( KiB/s): min=17017, max=19688, per=38.37%, avg=18465.29, stdev=1125.10, samples=7 00:11:35.454 iops : min= 4254, max= 4922, avg=4616.29, stdev=281.33, samples=7 00:11:35.454 lat (usec) : 100=0.01%, 250=95.34%, 500=4.56%, 750=0.02%, 1000=0.02% 00:11:35.454 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:11:35.454 cpu : usr=1.32%, sys=6.13%, ctx=17188, majf=0, minf=1 00:11:35.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.454 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:35.454 issued rwts: total=17177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.454 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77602: Sun Jul 21 16:26:53 2024 00:11:35.454 read: IOPS=2182, BW=8730KiB/s (8940kB/s)(27.3MiB/3198msec) 00:11:35.454 slat (usec): min=8, max=7668, avg=23.48, stdev=126.17 00:11:35.454 clat (usec): min=146, max=9553, avg=432.20, stdev=179.44 00:11:35.454 lat (usec): min=165, max=9621, avg=455.68, stdev=220.20 00:11:35.454 clat percentiles (usec): 00:11:35.454 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 198], 20.00th=[ 371], 00:11:35.454 | 30.00th=[ 400], 40.00th=[ 420], 50.00th=[ 437], 60.00th=[ 457], 00:11:35.454 | 70.00th=[ 486], 80.00th=[ 523], 90.00th=[ 570], 95.00th=[ 603], 00:11:35.454 | 99.00th=[ 660], 99.50th=[ 685], 99.90th=[ 2212], 99.95th=[ 2999], 00:11:35.454 | 99.99th=[ 9503] 00:11:35.454 bw ( KiB/s): min= 6536, max= 8816, per=17.00%, avg=8181.33, stdev=828.31, samples=6 00:11:35.454 iops : min= 1634, max= 2204, avg=2045.33, stdev=207.08, samples=6 00:11:35.454 lat (usec) : 250=11.79%, 500=62.35%, 750=25.58%, 1000=0.06% 00:11:35.454 lat (msec) : 2=0.10%, 4=0.09%, 10=0.01% 00:11:35.454 cpu : usr=0.97%, sys=4.22%, ctx=6995, majf=0, minf=1 00:11:35.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.455 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.455 issued rwts: total=6981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.455 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77603: Sun Jul 21 16:26:53 2024 00:11:35.455 read: IOPS=4036, BW=15.8MiB/s (16.5MB/s)(46.7MiB/2962msec) 00:11:35.455 slat (usec): min=13, max=127, avg=17.65, stdev= 5.86 00:11:35.455 clat (usec): min=162, max=2055, avg=228.35, stdev=47.56 00:11:35.455 lat (usec): min=179, max=2071, avg=246.00, stdev=48.44 00:11:35.455 clat percentiles (usec): 00:11:35.455 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:11:35.455 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 225], 00:11:35.455 | 70.00th=[ 233], 80.00th=[ 247], 90.00th=[ 289], 95.00th=[ 330], 00:11:35.455 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 453], 99.95th=[ 553], 00:11:35.455 | 99.99th=[ 1045] 00:11:35.455 bw ( KiB/s): min=13040, max=17344, per=33.28%, avg=16014.40, stdev=1893.61, samples=5 00:11:35.455 iops : min= 3260, max= 4336, avg=4003.60, stdev=473.40, samples=5 00:11:35.455 lat (usec) : 250=81.83%, 500=18.11%, 750=0.03%, 1000=0.01% 00:11:35.455 lat (msec) : 2=0.01%, 4=0.01% 00:11:35.455 cpu : usr=0.98%, sys=5.71%, ctx=11956, majf=0, minf=1 00:11:35.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.455 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.455 issued rwts: total=11956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.455 00:11:35.455 Run status group 0 (all jobs): 00:11:35.455 READ: bw=47.0MiB/s (49.3MB/s), 8730KiB/s-18.0MiB/s (8940kB/s-18.9MB/s), io=175MiB (183MB), run=2962-3721msec 00:11:35.455 00:11:35.455 Disk stats (read/write): 00:11:35.455 nvme0n1: 
ios=8410/0, merge=0/0, ticks=3121/0, in_queue=3121, util=95.42% 00:11:35.455 nvme0n2: ios=16637/0, merge=0/0, ticks=3353/0, in_queue=3353, util=95.80% 00:11:35.455 nvme0n3: ios=6590/0, merge=0/0, ticks=2892/0, in_queue=2892, util=96.24% 00:11:35.455 nvme0n4: ios=11577/0, merge=0/0, ticks=2706/0, in_queue=2706, util=96.79% 00:11:35.455 16:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.455 16:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:35.712 16:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.712 16:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:35.970 16:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.970 16:26:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:35.970 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.970 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:36.535 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:36.535 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:36.535 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:36.535 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77560 00:11:36.535 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.536 nvmf hotplug test: fio failed as expected 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:36.536 16:26:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:37.100 rmmod nvme_tcp 00:11:37.100 rmmod nvme_fabrics 00:11:37.100 rmmod nvme_keyring 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 77065 ']' 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 77065 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 77065 ']' 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 77065 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77065 00:11:37.100 killing process with pid 77065 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77065' 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 77065 00:11:37.100 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 77065 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:37.357 ************************************ 00:11:37.357 END TEST nvmf_fio_target 00:11:37.357 
************************************ 00:11:37.357 00:11:37.357 real 0m19.719s 00:11:37.357 user 1m15.992s 00:11:37.357 sys 0m8.344s 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.357 16:26:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.357 16:26:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:37.357 16:26:55 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:37.357 16:26:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.357 16:26:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.357 16:26:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:37.357 ************************************ 00:11:37.357 START TEST nvmf_bdevio 00:11:37.357 ************************************ 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:37.357 * Looking for test storage... 00:11:37.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.357 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:37.358 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:37.638 Cannot find device "nvmf_tgt_br" 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:37.638 Cannot find device "nvmf_tgt_br2" 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:37.638 Cannot find device "nvmf_tgt_br" 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:37.638 Cannot find device "nvmf_tgt_br2" 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@159 -- # true 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:37.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:37.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 
1 10.0.0.2 00:11:37.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:11:37.638 00:11:37.638 --- 10.0.0.2 ping statistics --- 00:11:37.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.638 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:37.638 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:37.638 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:11:37.638 00:11:37.638 --- 10.0.0.3 ping statistics --- 00:11:37.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.638 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:37.638 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:37.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:11:37.911 00:11:37.911 --- 10.0.0.1 ping statistics --- 00:11:37.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.911 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77921 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77921 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77921 ']' 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.911 16:26:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:37.911 [2024-07-21 16:26:55.921546] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:11:37.911 [2024-07-21 16:26:55.921786] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.911 [2024-07-21 16:26:56.062082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.188 [2024-07-21 16:26:56.220623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.188 [2024-07-21 16:26:56.220695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.188 [2024-07-21 16:26:56.220709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.188 [2024-07-21 16:26:56.220720] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.188 [2024-07-21 16:26:56.220729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.188 [2024-07-21 16:26:56.220893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:38.188 [2024-07-21 16:26:56.221505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:38.188 [2024-07-21 16:26:56.221813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:38.188 [2024-07-21 16:26:56.221897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.753 [2024-07-21 16:26:56.901232] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.753 Malloc0 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.753 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.011 [2024-07-21 16:26:56.967188] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:39.011 { 00:11:39.011 "params": { 00:11:39.011 "name": "Nvme$subsystem", 00:11:39.011 "trtype": "$TEST_TRANSPORT", 00:11:39.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.011 "adrfam": "ipv4", 00:11:39.011 "trsvcid": "$NVMF_PORT", 00:11:39.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.011 "hdgst": ${hdgst:-false}, 00:11:39.011 "ddgst": ${ddgst:-false} 00:11:39.011 }, 00:11:39.011 "method": "bdev_nvme_attach_controller" 00:11:39.011 } 00:11:39.011 EOF 00:11:39.011 )") 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:39.011 16:26:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:39.011 "params": { 00:11:39.011 "name": "Nvme1", 00:11:39.011 "trtype": "tcp", 00:11:39.011 "traddr": "10.0.0.2", 00:11:39.011 "adrfam": "ipv4", 00:11:39.011 "trsvcid": "4420", 00:11:39.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.011 "hdgst": false, 00:11:39.011 "ddgst": false 00:11:39.011 }, 00:11:39.011 "method": "bdev_nvme_attach_controller" 00:11:39.011 }' 00:11:39.011 [2024-07-21 16:26:57.029496] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:11:39.011 [2024-07-21 16:26:57.029591] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77975 ] 00:11:39.011 [2024-07-21 16:26:57.170641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:39.268 [2024-07-21 16:26:57.271339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.268 [2024-07-21 16:26:57.271478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.268 [2024-07-21 16:26:57.271492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.268 I/O targets: 00:11:39.268 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:39.268 00:11:39.268 00:11:39.268 CUnit - A unit testing framework for C - Version 2.1-3 00:11:39.268 http://cunit.sourceforge.net/ 00:11:39.268 00:11:39.268 00:11:39.268 Suite: bdevio tests on: Nvme1n1 00:11:39.526 Test: blockdev write read block ...passed 00:11:39.526 Test: blockdev write zeroes read block ...passed 00:11:39.526 Test: blockdev write zeroes read no split ...passed 00:11:39.526 Test: blockdev write zeroes read split ...passed 00:11:39.526 Test: blockdev write zeroes read split partial ...passed 00:11:39.526 Test: blockdev reset ...[2024-07-21 16:26:57.572413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:39.526 [2024-07-21 16:26:57.572541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e29180 (9): Bad file descriptor 00:11:39.526 [2024-07-21 16:26:57.583521] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:39.526 passed 00:11:39.526 Test: blockdev write read 8 blocks ...passed 00:11:39.526 Test: blockdev write read size > 128k ...passed 00:11:39.526 Test: blockdev write read invalid size ...passed 00:11:39.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:39.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:39.526 Test: blockdev write read max offset ...passed 00:11:39.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:39.526 Test: blockdev writev readv 8 blocks ...passed 00:11:39.526 Test: blockdev writev readv 30 x 1block ...passed 00:11:39.783 Test: blockdev writev readv block ...passed 00:11:39.783 Test: blockdev writev readv size > 128k ...passed 00:11:39.783 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:39.783 Test: blockdev comparev and writev ...[2024-07-21 16:26:57.755921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.783 [2024-07-21 16:26:57.755989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:39.783 [2024-07-21 16:26:57.756009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.783 [2024-07-21 16:26:57.756020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:39.784 [2024-07-21 16:26:57.756492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.784 [2024-07-21 16:26:57.756521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:39.784 [2024-07-21 16:26:57.756538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.784 [2024-07-21 16:26:57.756549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:39.784 [2024-07-21 16:26:57.756966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.784 [2024-07-21 16:26:57.756994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:39.784 [2024-07-21 16:26:57.757011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.784 [2024-07-21 16:26:57.757021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:39.784 [2024-07-21 16:26:57.757481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.784 [2024-07-21 16:26:57.757513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:39.784 [2024-07-21 16:26:57.757534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:39.784 [2024-07-21 16:26:57.757555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:39.784 passed 00:11:39.784 Test: blockdev nvme passthru rw ...passed 00:11:39.784 Test: blockdev nvme passthru vendor specific ...[2024-07-21 16:26:57.840619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:39.784 [2024-07-21 16:26:57.840651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:39.784 [2024-07-21 16:26:57.840812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:39.784 [2024-07-21 16:26:57.840850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:39.784 [2024-07-21 16:26:57.840971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:39.784 [2024-07-21 16:26:57.840994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:39.784 [2024-07-21 16:26:57.841119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:39.784 [2024-07-21 16:26:57.841145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:39.784 passed 00:11:39.784 Test: blockdev nvme admin passthru ...passed 00:11:39.784 Test: blockdev copy ...passed 00:11:39.784 00:11:39.784 Run Summary: Type Total Ran Passed Failed Inactive 00:11:39.784 suites 1 1 n/a 0 0 00:11:39.784 tests 23 23 23 0 0 00:11:39.784 asserts 152 152 152 0 n/a 00:11:39.784 00:11:39.784 Elapsed time = 0.884 seconds 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:40.041 rmmod nvme_tcp 00:11:40.041 rmmod nvme_fabrics 00:11:40.041 rmmod nvme_keyring 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77921 ']' 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77921 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77921 ']' 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77921 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77921 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:40.041 killing process with pid 77921 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77921' 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77921 00:11:40.041 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77921 00:11:40.298 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:40.298 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:40.298 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:40.298 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.298 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:40.298 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.298 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.298 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.298 16:26:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:40.555 ************************************ 00:11:40.555 END TEST nvmf_bdevio 00:11:40.555 ************************************ 00:11:40.555 00:11:40.555 real 0m3.083s 00:11:40.555 user 0m10.958s 00:11:40.555 sys 0m0.854s 00:11:40.555 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.555 16:26:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:40.555 16:26:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:40.555 16:26:58 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:40.555 16:26:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:40.555 16:26:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.555 16:26:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:40.555 ************************************ 00:11:40.555 START TEST nvmf_auth_target 00:11:40.555 ************************************ 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:40.555 * Looking for test storage... 
00:11:40.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.555 16:26:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:40.556 Cannot find device "nvmf_tgt_br" 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:40.556 Cannot find device "nvmf_tgt_br2" 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:40.556 Cannot find device "nvmf_tgt_br" 00:11:40.556 
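The "Cannot find device" messages above are expected on a fresh runner: the old test network is torn down before nvmf_veth_init rebuilds it. As a reading aid, here is a condensed sketch of the topology the upcoming common.sh steps create; interface, namespace and bridge names are taken from the trace, the link-up commands and the iptables FORWARD rule are omitted, and the summary is illustrative rather than part of the test output.

# nvmf_init_if  (10.0.0.1)         -- veth --  nvmf_init_br  --+
# nvmf_tgt_if   (10.0.0.2, in ns)  -- veth --  nvmf_tgt_br   --+-- nvmf_br (bridge)
# nvmf_tgt_if2  (10.0.0.3, in ns)  -- veth --  nvmf_tgt_br2  --+
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port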
16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:40.556 Cannot find device "nvmf_tgt_br2" 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:40.556 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:40.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:40.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:40.813 16:26:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:40.813 16:26:59 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:40.813 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:41.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:11:41.071 00:11:41.071 --- 10.0.0.2 ping statistics --- 00:11:41.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.071 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:41.071 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:41.071 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:11:41.071 00:11:41.071 --- 10.0.0.3 ping statistics --- 00:11:41.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.071 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:41.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:41.071 00:11:41.071 --- 10.0.0.1 ping statistics --- 00:11:41.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.071 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=78154 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 78154 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78154 ']' 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.071 16:26:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.071 16:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=78198 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:11:42.003 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bc1916dbfca739cb21bb416e0f3e0bc6b2337c6fa7c2ee83 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ILp 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bc1916dbfca739cb21bb416e0f3e0bc6b2337c6fa7c2ee83 0 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bc1916dbfca739cb21bb416e0f3e0bc6b2337c6fa7c2ee83 0 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bc1916dbfca739cb21bb416e0f3e0bc6b2337c6fa7c2ee83 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:11:42.004 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ILp 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ILp 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ILp 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9450d884d99221f7d322e0c89e4ba2107030fa3585e2a62abf59b2a1492c210c 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qaB 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9450d884d99221f7d322e0c89e4ba2107030fa3585e2a62abf59b2a1492c210c 3 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9450d884d99221f7d322e0c89e4ba2107030fa3585e2a62abf59b2a1492c210c 3 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9450d884d99221f7d322e0c89e4ba2107030fa3585e2a62abf59b2a1492c210c 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qaB 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qaB 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.qaB 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=09c88b0c907dd09551a79eceb572beff 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HWP 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 09c88b0c907dd09551a79eceb572beff 1 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 09c88b0c907dd09551a79eceb572beff 1 
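The gen_dhchap_key calls traced here turn raw bytes from /dev/urandom into the DHHC-1 secret strings that reappear later on the nvme connect command lines. A minimal standalone sketch of that formatting is below; it assumes the NVMe DH-HMAC-CHAP secret layout of base64(key bytes + CRC-32 suffix) with the hash identifier in the prefix (00 = no hash), so read it as an approximation of the common.sh helper rather than its exact code, and note that the output file name is an example, not the test's mktemp path.

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes printed as 48 hex chars, as in the trace
python3 - "$key_hex" <<'PY' > /tmp/spdk.key-null.example
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(raw).to_bytes(4, "little")             # CRC-32 of the key appended as suffix (byte order assumed)
print("DHHC-1:00:" + base64.b64encode(raw + crc).decode() + ":")   # "00" selects the null/no-hash variant
PY
chmod 0600 /tmp/spdk.key-null.example            # same 0600 permissions the trace applies

The DHHC-1:00:...: and DHHC-1:03:...: secrets passed to nvme connect further down in the log have this same general shape.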
00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=09c88b0c907dd09551a79eceb572beff 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HWP 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HWP 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.HWP 00:11:42.261 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4cc904c1e5663f336580099952fe44535a9d82d62ca7922 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ynC 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4cc904c1e5663f336580099952fe44535a9d82d62ca7922 2 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4cc904c1e5663f336580099952fe44535a9d82d62ca7922 2 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4cc904c1e5663f336580099952fe44535a9d82d62ca7922 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ynC 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ynC 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ynC 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:42.262 
16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b5a65cc039c4ee77ac7eb830d9adec8b5a88115621e0ae43 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uoP 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b5a65cc039c4ee77ac7eb830d9adec8b5a88115621e0ae43 2 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b5a65cc039c4ee77ac7eb830d9adec8b5a88115621e0ae43 2 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b5a65cc039c4ee77ac7eb830d9adec8b5a88115621e0ae43 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uoP 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uoP 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.uoP 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:42.262 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ce9c1cd801ad579f98783c89fb3cbf33 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fDA 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ce9c1cd801ad579f98783c89fb3cbf33 1 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ce9c1cd801ad579f98783c89fb3cbf33 1 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ce9c1cd801ad579f98783c89fb3cbf33 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fDA 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fDA 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.fDA 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:42.520 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4f4d49d816a92ac057d5e132cb5ca459958f2dfd27af6895cbf477a29baea543 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TNZ 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4f4d49d816a92ac057d5e132cb5ca459958f2dfd27af6895cbf477a29baea543 3 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4f4d49d816a92ac057d5e132cb5ca459958f2dfd27af6895cbf477a29baea543 3 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4f4d49d816a92ac057d5e132cb5ca459958f2dfd27af6895cbf477a29baea543 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TNZ 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TNZ 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.TNZ 00:11:42.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 78154 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78154 ']' 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
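With the four keys and three controller keys generated, the trace below waits for both RPC sockets and then wires the secrets into the target and the host application. As a reading aid, the essential RPC sequence is condensed here; socket paths, key names and NQNs are copied from the log, and the variables are only a shorthand for this sketch.

# Condensed, illustrative version of the RPC flow the trace drives next.
R=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Register the key files on the target (default RPC socket) and on the host app.
$R keyring_file_add_key key0 /tmp/spdk.key-null.ILp
$R keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qaB
$R -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ILp
$R -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qaB

# Restrict the host to one digest/dhgroup combination, allow the host NQN on the
# subsystem with that key pair, then attach a controller so DH-HMAC-CHAP runs.
$R -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$R nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
$R -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

After each attach, the test queries nvmf_subsystem_get_qpairs and checks with jq that the qpair reports the expected digest and dhgroup with auth state "completed", then detaches the controller before moving on to the next key and dhgroup combination.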
00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.521 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.779 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:42.779 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:42.779 16:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 78198 /var/tmp/host.sock 00:11:42.779 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 78198 ']' 00:11:42.779 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:42.779 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:42.779 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:42.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:42.779 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:42.779 16:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ILp 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ILp 00:11:43.037 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ILp 00:11:43.295 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.qaB ]] 00:11:43.295 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qaB 00:11:43.295 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.295 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.295 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.295 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qaB 00:11:43.295 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.qaB 00:11:43.552 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:43.552 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HWP 00:11:43.552 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.552 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.552 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.552 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.HWP 00:11:43.552 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.HWP 00:11:43.811 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ynC ]] 00:11:43.811 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ynC 00:11:43.811 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.811 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.811 16:27:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.811 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ynC 00:11:43.811 16:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ynC 00:11:44.069 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:44.069 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uoP 00:11:44.069 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.069 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.069 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.069 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.uoP 00:11:44.069 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.uoP 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.fDA ]] 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fDA 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fDA 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fDA 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:44.635 
16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.TNZ 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.TNZ 00:11:44.635 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.TNZ 00:11:44.893 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:11:44.893 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:44.893 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.893 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.893 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:44.893 16:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.151 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.409 00:11:45.409 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.409 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.409 16:27:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.666 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.666 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.666 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.666 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.666 16:27:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.666 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.666 { 00:11:45.666 "auth": { 00:11:45.666 "dhgroup": "null", 00:11:45.666 "digest": "sha256", 00:11:45.666 "state": "completed" 00:11:45.666 }, 00:11:45.667 "cntlid": 1, 00:11:45.667 "listen_address": { 00:11:45.667 "adrfam": "IPv4", 00:11:45.667 "traddr": "10.0.0.2", 00:11:45.667 "trsvcid": "4420", 00:11:45.667 "trtype": "TCP" 00:11:45.667 }, 00:11:45.667 "peer_address": { 00:11:45.667 "adrfam": "IPv4", 00:11:45.667 "traddr": "10.0.0.1", 00:11:45.667 "trsvcid": "59104", 00:11:45.667 "trtype": "TCP" 00:11:45.667 }, 00:11:45.667 "qid": 0, 00:11:45.667 "state": "enabled", 00:11:45.667 "thread": "nvmf_tgt_poll_group_000" 00:11:45.667 } 00:11:45.667 ]' 00:11:45.667 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.924 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.924 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.924 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:45.924 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.924 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.925 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.925 16:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.183 16:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.369 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.627 00:11:50.627 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.627 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.627 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.886 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.886 16:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.886 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.886 16:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.886 16:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.886 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.886 { 00:11:50.886 "auth": { 00:11:50.886 "dhgroup": "null", 00:11:50.886 "digest": "sha256", 00:11:50.886 "state": "completed" 00:11:50.886 }, 00:11:50.886 "cntlid": 3, 00:11:50.886 "listen_address": { 00:11:50.886 "adrfam": "IPv4", 00:11:50.886 "traddr": "10.0.0.2", 00:11:50.886 "trsvcid": "4420", 00:11:50.886 "trtype": "TCP" 00:11:50.886 }, 00:11:50.886 "peer_address": { 
00:11:50.886 "adrfam": "IPv4", 00:11:50.886 "traddr": "10.0.0.1", 00:11:50.886 "trsvcid": "51166", 00:11:50.886 "trtype": "TCP" 00:11:50.886 }, 00:11:50.886 "qid": 0, 00:11:50.886 "state": "enabled", 00:11:50.886 "thread": "nvmf_tgt_poll_group_000" 00:11:50.886 } 00:11:50.886 ]' 00:11:50.887 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.887 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.887 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.145 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:51.145 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.145 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.145 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.145 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.404 16:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:11:51.969 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.969 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:51.969 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.969 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.969 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.969 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.969 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:51.969 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.226 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.227 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.227 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.793 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.793 { 00:11:52.793 "auth": { 00:11:52.793 "dhgroup": "null", 00:11:52.793 "digest": "sha256", 00:11:52.793 "state": "completed" 00:11:52.793 }, 00:11:52.793 "cntlid": 5, 00:11:52.793 "listen_address": { 00:11:52.793 "adrfam": "IPv4", 00:11:52.793 "traddr": "10.0.0.2", 00:11:52.793 "trsvcid": "4420", 00:11:52.793 "trtype": "TCP" 00:11:52.793 }, 00:11:52.793 "peer_address": { 00:11:52.793 "adrfam": "IPv4", 00:11:52.793 "traddr": "10.0.0.1", 00:11:52.793 "trsvcid": "51200", 00:11:52.793 "trtype": "TCP" 00:11:52.793 }, 00:11:52.793 "qid": 0, 00:11:52.793 "state": "enabled", 00:11:52.793 "thread": "nvmf_tgt_poll_group_000" 00:11:52.793 } 00:11:52.793 ]' 00:11:52.793 16:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.052 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:53.052 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.052 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:53.052 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.052 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.052 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.052 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.310 16:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:11:54.245 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.245 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:54.245 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.245 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.245 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.245 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.245 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:54.245 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:54.507 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:54.781 00:11:54.781 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.781 16:27:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.781 16:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.055 { 00:11:55.055 "auth": { 00:11:55.055 "dhgroup": "null", 00:11:55.055 "digest": "sha256", 00:11:55.055 "state": "completed" 00:11:55.055 }, 00:11:55.055 "cntlid": 7, 00:11:55.055 "listen_address": { 00:11:55.055 "adrfam": "IPv4", 00:11:55.055 "traddr": "10.0.0.2", 00:11:55.055 "trsvcid": "4420", 00:11:55.055 "trtype": "TCP" 00:11:55.055 }, 00:11:55.055 "peer_address": { 00:11:55.055 "adrfam": "IPv4", 00:11:55.055 "traddr": "10.0.0.1", 00:11:55.055 "trsvcid": "51230", 00:11:55.055 "trtype": "TCP" 00:11:55.055 }, 00:11:55.055 "qid": 0, 00:11:55.055 "state": "enabled", 00:11:55.055 "thread": "nvmf_tgt_poll_group_000" 00:11:55.055 } 00:11:55.055 ]' 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.055 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.313 16:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:11:55.880 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.880 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:55.880 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.880 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.880 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.880 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:11:55.880 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:55.880 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:55.880 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.138 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.396 00:11:56.396 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.396 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.396 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.654 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.654 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.654 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.654 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.912 { 00:11:56.912 "auth": { 00:11:56.912 "dhgroup": "ffdhe2048", 00:11:56.912 "digest": "sha256", 00:11:56.912 "state": "completed" 00:11:56.912 }, 00:11:56.912 "cntlid": 9, 00:11:56.912 "listen_address": { 00:11:56.912 "adrfam": "IPv4", 
00:11:56.912 "traddr": "10.0.0.2", 00:11:56.912 "trsvcid": "4420", 00:11:56.912 "trtype": "TCP" 00:11:56.912 }, 00:11:56.912 "peer_address": { 00:11:56.912 "adrfam": "IPv4", 00:11:56.912 "traddr": "10.0.0.1", 00:11:56.912 "trsvcid": "51250", 00:11:56.912 "trtype": "TCP" 00:11:56.912 }, 00:11:56.912 "qid": 0, 00:11:56.912 "state": "enabled", 00:11:56.912 "thread": "nvmf_tgt_poll_group_000" 00:11:56.912 } 00:11:56.912 ]' 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.912 16:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.170 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:11:58.104 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.104 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:11:58.104 16:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.104 16:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.104 16:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.104 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.104 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:58.104 16:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.104 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:58.362 00:11:58.362 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.362 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.362 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.928 { 00:11:58.928 "auth": { 00:11:58.928 "dhgroup": "ffdhe2048", 00:11:58.928 "digest": "sha256", 00:11:58.928 "state": "completed" 00:11:58.928 }, 00:11:58.928 "cntlid": 11, 00:11:58.928 "listen_address": { 00:11:58.928 "adrfam": "IPv4", 00:11:58.928 "traddr": "10.0.0.2", 00:11:58.928 "trsvcid": "4420", 00:11:58.928 "trtype": "TCP" 00:11:58.928 }, 00:11:58.928 "peer_address": { 00:11:58.928 "adrfam": "IPv4", 00:11:58.928 "traddr": "10.0.0.1", 00:11:58.928 "trsvcid": "50740", 00:11:58.928 "trtype": "TCP" 00:11:58.928 }, 00:11:58.928 "qid": 0, 00:11:58.928 "state": "enabled", 00:11:58.928 "thread": "nvmf_tgt_poll_group_000" 00:11:58.928 } 00:11:58.928 ]' 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:58.928 16:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.928 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.928 16:27:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.929 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.187 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:12:00.121 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.121 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:00.121 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.121 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.121 16:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.121 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.121 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:00.121 16:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.121 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.379 00:12:00.379 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.379 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.379 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.636 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.636 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.636 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.636 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.636 16:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.636 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.636 { 00:12:00.636 "auth": { 00:12:00.636 "dhgroup": "ffdhe2048", 00:12:00.636 "digest": "sha256", 00:12:00.636 "state": "completed" 00:12:00.636 }, 00:12:00.636 "cntlid": 13, 00:12:00.636 "listen_address": { 00:12:00.636 "adrfam": "IPv4", 00:12:00.636 "traddr": "10.0.0.2", 00:12:00.636 "trsvcid": "4420", 00:12:00.636 "trtype": "TCP" 00:12:00.636 }, 00:12:00.636 "peer_address": { 00:12:00.636 "adrfam": "IPv4", 00:12:00.636 "traddr": "10.0.0.1", 00:12:00.636 "trsvcid": "50774", 00:12:00.636 "trtype": "TCP" 00:12:00.636 }, 00:12:00.636 "qid": 0, 00:12:00.636 "state": "enabled", 00:12:00.636 "thread": "nvmf_tgt_poll_group_000" 00:12:00.636 } 00:12:00.636 ]' 00:12:00.636 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.894 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:00.894 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.894 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:00.894 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.894 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.894 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.894 16:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.152 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:12:01.741 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.741 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 
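The trace above is one full verification round of the DH-HMAC-CHAP target test: the SPDK host side is pinned to a single digest/dhgroup pair, the host NQN is added to the subsystem with its key (and, where one exists, a controller key), a controller is attached through the SPDK host stack, the qpair's auth block is checked, and the same handshake is then repeated from the kernel initiator before the host is removed again. A minimal standalone sketch of that round follows, assuming the socket path, NQNs and pre-registered key names (key0/ckey0) used by this run; KEY0_SECRET and CKEY0_SECRET are placeholders for the DHHC-1:xx:...: strings visible in the log.

#!/usr/bin/env bash
# Sketch of one authentication round as traced above (not the test script itself).
# Assumes: the target already exposes nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420,
# an SPDK application acting as the host answers RPCs on /var/tmp/host.sock (the
# "hostrpc" socket in the trace), and key0/ckey0 were set up earlier in the script.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f

# Pin the SPDK host to one digest/dhgroup combination for this round.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Authorize the host on the subsystem and attach a controller with matching keys.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The round passes when the qpair reports a completed auth transaction.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # -> completed
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake from the kernel initiator using the raw secrets, then clean up.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "${hostnqn#*uuid:}" \
    --dhchap-secret "$KEY0_SECRET" --dhchap-ctrl-secret "$CKEY0_SECRET"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"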
00:12:01.741 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.741 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.741 16:27:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.741 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.741 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:01.741 16:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.999 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.257 00:12:02.515 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.515 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.515 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.772 { 00:12:02.772 "auth": { 00:12:02.772 "dhgroup": 
"ffdhe2048", 00:12:02.772 "digest": "sha256", 00:12:02.772 "state": "completed" 00:12:02.772 }, 00:12:02.772 "cntlid": 15, 00:12:02.772 "listen_address": { 00:12:02.772 "adrfam": "IPv4", 00:12:02.772 "traddr": "10.0.0.2", 00:12:02.772 "trsvcid": "4420", 00:12:02.772 "trtype": "TCP" 00:12:02.772 }, 00:12:02.772 "peer_address": { 00:12:02.772 "adrfam": "IPv4", 00:12:02.772 "traddr": "10.0.0.1", 00:12:02.772 "trsvcid": "50804", 00:12:02.772 "trtype": "TCP" 00:12:02.772 }, 00:12:02.772 "qid": 0, 00:12:02.772 "state": "enabled", 00:12:02.772 "thread": "nvmf_tgt_poll_group_000" 00:12:02.772 } 00:12:02.772 ]' 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.772 16:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.029 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:12:03.594 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.594 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:03.594 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.594 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.852 16:27:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.852 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.852 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.852 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:03.852 16:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.852 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.419 00:12:04.419 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.419 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.419 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.419 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.419 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.419 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.419 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.677 { 00:12:04.677 "auth": { 00:12:04.677 "dhgroup": "ffdhe3072", 00:12:04.677 "digest": "sha256", 00:12:04.677 "state": "completed" 00:12:04.677 }, 00:12:04.677 "cntlid": 17, 00:12:04.677 "listen_address": { 00:12:04.677 "adrfam": "IPv4", 00:12:04.677 "traddr": "10.0.0.2", 00:12:04.677 "trsvcid": "4420", 00:12:04.677 "trtype": "TCP" 00:12:04.677 }, 00:12:04.677 "peer_address": { 00:12:04.677 "adrfam": "IPv4", 00:12:04.677 "traddr": "10.0.0.1", 00:12:04.677 "trsvcid": "50844", 00:12:04.677 "trtype": "TCP" 00:12:04.677 }, 00:12:04.677 "qid": 0, 00:12:04.677 "state": "enabled", 00:12:04.677 "thread": "nvmf_tgt_poll_group_000" 00:12:04.677 } 00:12:04.677 ]' 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.677 16:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.935 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:12:05.869 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.869 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:05.869 16:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.869 16:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.869 16:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.869 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.869 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:05.869 16:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:06.127 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.128 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.128 
16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.386 00:12:06.386 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.386 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.386 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.644 { 00:12:06.644 "auth": { 00:12:06.644 "dhgroup": "ffdhe3072", 00:12:06.644 "digest": "sha256", 00:12:06.644 "state": "completed" 00:12:06.644 }, 00:12:06.644 "cntlid": 19, 00:12:06.644 "listen_address": { 00:12:06.644 "adrfam": "IPv4", 00:12:06.644 "traddr": "10.0.0.2", 00:12:06.644 "trsvcid": "4420", 00:12:06.644 "trtype": "TCP" 00:12:06.644 }, 00:12:06.644 "peer_address": { 00:12:06.644 "adrfam": "IPv4", 00:12:06.644 "traddr": "10.0.0.1", 00:12:06.644 "trsvcid": "50864", 00:12:06.644 "trtype": "TCP" 00:12:06.644 }, 00:12:06.644 "qid": 0, 00:12:06.644 "state": "enabled", 00:12:06.644 "thread": "nvmf_tgt_poll_group_000" 00:12:06.644 } 00:12:06.644 ]' 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:06.644 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.905 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.905 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.905 16:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.167 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:12:07.732 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
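Each round's pass/fail decision in the trace comes from three jq probes against the nvmf_subsystem_get_qpairs output (the target/auth.sh@46-48 lines): the negotiated digest, the negotiated dhgroup, and the auth state must match what the round configured. A compact sketch of that check, assuming the JSON shape printed above and the sha256/ffdhe3072 combination being exercised at this point:

# Sketch: verify the first qpair's negotiated auth parameters, assuming the
# .auth.{digest,dhgroup,state} fields shown in the qpairs listing above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]] || exit 1
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]] || exit 1
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1
echo "qpair 0: DH-HMAC-CHAP completed with sha256/ffdhe3072"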
00:12:07.732 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:07.732 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.732 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.732 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.732 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.732 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:07.732 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.990 16:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.248 00:12:08.248 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.248 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.248 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.505 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.505 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.505 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.505 16:27:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.505 16:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.506 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.506 { 00:12:08.506 "auth": { 00:12:08.506 "dhgroup": "ffdhe3072", 00:12:08.506 "digest": "sha256", 00:12:08.506 "state": "completed" 00:12:08.506 }, 00:12:08.506 "cntlid": 21, 00:12:08.506 "listen_address": { 00:12:08.506 "adrfam": "IPv4", 00:12:08.506 "traddr": "10.0.0.2", 00:12:08.506 "trsvcid": "4420", 00:12:08.506 "trtype": "TCP" 00:12:08.506 }, 00:12:08.506 "peer_address": { 00:12:08.506 "adrfam": "IPv4", 00:12:08.506 "traddr": "10.0.0.1", 00:12:08.506 "trsvcid": "50874", 00:12:08.506 "trtype": "TCP" 00:12:08.506 }, 00:12:08.506 "qid": 0, 00:12:08.506 "state": "enabled", 00:12:08.506 "thread": "nvmf_tgt_poll_group_000" 00:12:08.506 } 00:12:08.506 ]' 00:12:08.506 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.506 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.506 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.506 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.506 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.506 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.506 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.506 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.764 16:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:12:09.347 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:09.605 16:27:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:09.605 16:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.170 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.170 { 00:12:10.170 "auth": { 00:12:10.170 "dhgroup": "ffdhe3072", 00:12:10.170 "digest": "sha256", 00:12:10.170 "state": "completed" 00:12:10.170 }, 00:12:10.170 "cntlid": 23, 00:12:10.170 "listen_address": { 00:12:10.170 "adrfam": "IPv4", 00:12:10.170 "traddr": "10.0.0.2", 00:12:10.170 "trsvcid": "4420", 00:12:10.170 "trtype": "TCP" 00:12:10.170 }, 00:12:10.170 "peer_address": { 00:12:10.170 "adrfam": "IPv4", 00:12:10.170 "traddr": "10.0.0.1", 00:12:10.170 "trsvcid": "44364", 00:12:10.170 "trtype": "TCP" 00:12:10.170 }, 00:12:10.170 "qid": 0, 00:12:10.170 "state": "enabled", 00:12:10.170 "thread": "nvmf_tgt_poll_group_000" 00:12:10.170 } 00:12:10.170 ]' 00:12:10.170 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.428 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.428 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:12:10.428 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:10.428 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.428 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.428 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.428 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.685 16:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:12:11.263 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.521 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:11.521 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.521 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.521 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.521 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.521 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.521 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:11.521 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:11.778 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.779 16:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.036 00:12:12.036 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.036 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.036 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.294 { 00:12:12.294 "auth": { 00:12:12.294 "dhgroup": "ffdhe4096", 00:12:12.294 "digest": "sha256", 00:12:12.294 "state": "completed" 00:12:12.294 }, 00:12:12.294 "cntlid": 25, 00:12:12.294 "listen_address": { 00:12:12.294 "adrfam": "IPv4", 00:12:12.294 "traddr": "10.0.0.2", 00:12:12.294 "trsvcid": "4420", 00:12:12.294 "trtype": "TCP" 00:12:12.294 }, 00:12:12.294 "peer_address": { 00:12:12.294 "adrfam": "IPv4", 00:12:12.294 "traddr": "10.0.0.1", 00:12:12.294 "trsvcid": "44384", 00:12:12.294 "trtype": "TCP" 00:12:12.294 }, 00:12:12.294 "qid": 0, 00:12:12.294 "state": "enabled", 00:12:12.294 "thread": "nvmf_tgt_poll_group_000" 00:12:12.294 } 00:12:12.294 ]' 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:12.294 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.552 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.552 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.552 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.552 16:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret 
DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:12:13.487 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.488 16:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.054 00:12:14.054 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.054 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.054 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
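The "for dhgroup" / "for keyid" markers in the trace show that the round above is simply replayed for every dhgroup/key combination; only the sha256 leg is visible in this part of the log. A rough sketch of that driver loop follows, with connect_authenticate standing in for the per-round sequence sketched earlier (the real target/auth.sh defines the arrays and helpers elsewhere in the run, so the lists and stub here are illustrative, not the script's exact contents).

# Sketch of the driver loop implied by the "for dhgroup" / "for keyid" trace lines.
# Only sha256 and the dhgroups seen so far are listed; the full script presumably
# covers more. hostrpc mirrors the helper expanded at target/auth.sh@31.
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

connect_authenticate() {   # stub: the real helper runs the add_host/attach/verify/
    echo "round: digest=$1 dhgroup=$2 keyid=$3"   # detach/connect/cleanup sequence above
}

dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)   # groups exercised in this excerpt
keys=(key0 key1 key2 key3)                      # keyid 3 has no controller key (ckey3 unset)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done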
00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.313 { 00:12:14.313 "auth": { 00:12:14.313 "dhgroup": "ffdhe4096", 00:12:14.313 "digest": "sha256", 00:12:14.313 "state": "completed" 00:12:14.313 }, 00:12:14.313 "cntlid": 27, 00:12:14.313 "listen_address": { 00:12:14.313 "adrfam": "IPv4", 00:12:14.313 "traddr": "10.0.0.2", 00:12:14.313 "trsvcid": "4420", 00:12:14.313 "trtype": "TCP" 00:12:14.313 }, 00:12:14.313 "peer_address": { 00:12:14.313 "adrfam": "IPv4", 00:12:14.313 "traddr": "10.0.0.1", 00:12:14.313 "trsvcid": "44408", 00:12:14.313 "trtype": "TCP" 00:12:14.313 }, 00:12:14.313 "qid": 0, 00:12:14.313 "state": "enabled", 00:12:14.313 "thread": "nvmf_tgt_poll_group_000" 00:12:14.313 } 00:12:14.313 ]' 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.313 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.572 16:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:12:15.138 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.138 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:15.138 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.138 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.138 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.138 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.138 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:15.138 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:15.396 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:15.396 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.396 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:15.396 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:15.396 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:15.396 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.396 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.396 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.396 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.653 16:27:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.653 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.653 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.911 00:12:15.911 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.911 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.911 16:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.169 { 00:12:16.169 "auth": { 00:12:16.169 "dhgroup": "ffdhe4096", 00:12:16.169 "digest": "sha256", 00:12:16.169 "state": "completed" 00:12:16.169 }, 00:12:16.169 "cntlid": 29, 00:12:16.169 "listen_address": { 00:12:16.169 "adrfam": "IPv4", 00:12:16.169 "traddr": "10.0.0.2", 00:12:16.169 "trsvcid": "4420", 00:12:16.169 "trtype": "TCP" 00:12:16.169 }, 00:12:16.169 "peer_address": { 00:12:16.169 "adrfam": "IPv4", 00:12:16.169 "traddr": "10.0.0.1", 00:12:16.169 "trsvcid": "44452", 00:12:16.169 "trtype": "TCP" 00:12:16.169 }, 00:12:16.169 "qid": 0, 00:12:16.169 "state": "enabled", 00:12:16.169 "thread": 
"nvmf_tgt_poll_group_000" 00:12:16.169 } 00:12:16.169 ]' 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.169 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.427 16:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:12:17.363 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.363 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:17.363 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.363 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.363 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.363 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.364 16:27:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.622 16:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.622 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:17.622 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:17.879 00:12:17.880 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.880 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.880 16:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.137 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.137 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.137 16:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.137 16:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.137 16:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.137 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.137 { 00:12:18.137 "auth": { 00:12:18.138 "dhgroup": "ffdhe4096", 00:12:18.138 "digest": "sha256", 00:12:18.138 "state": "completed" 00:12:18.138 }, 00:12:18.138 "cntlid": 31, 00:12:18.138 "listen_address": { 00:12:18.138 "adrfam": "IPv4", 00:12:18.138 "traddr": "10.0.0.2", 00:12:18.138 "trsvcid": "4420", 00:12:18.138 "trtype": "TCP" 00:12:18.138 }, 00:12:18.138 "peer_address": { 00:12:18.138 "adrfam": "IPv4", 00:12:18.138 "traddr": "10.0.0.1", 00:12:18.138 "trsvcid": "44486", 00:12:18.138 "trtype": "TCP" 00:12:18.138 }, 00:12:18.138 "qid": 0, 00:12:18.138 "state": "enabled", 00:12:18.138 "thread": "nvmf_tgt_poll_group_000" 00:12:18.138 } 00:12:18.138 ]' 00:12:18.138 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.138 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.138 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.138 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.138 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.138 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.138 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.138 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.395 16:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 
93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.330 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.898 00:12:19.898 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.898 16:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.898 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.157 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.157 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.157 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.157 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.157 16:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.157 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.157 { 00:12:20.157 "auth": { 00:12:20.157 "dhgroup": "ffdhe6144", 00:12:20.157 "digest": "sha256", 00:12:20.157 "state": "completed" 00:12:20.157 }, 00:12:20.157 "cntlid": 33, 00:12:20.157 "listen_address": { 00:12:20.157 "adrfam": "IPv4", 00:12:20.157 "traddr": "10.0.0.2", 00:12:20.157 "trsvcid": "4420", 00:12:20.157 "trtype": "TCP" 00:12:20.157 }, 00:12:20.157 "peer_address": { 00:12:20.157 "adrfam": "IPv4", 00:12:20.157 "traddr": "10.0.0.1", 00:12:20.157 "trsvcid": "46130", 00:12:20.157 "trtype": "TCP" 00:12:20.157 }, 00:12:20.157 "qid": 0, 00:12:20.157 "state": "enabled", 00:12:20.157 "thread": "nvmf_tgt_poll_group_000" 00:12:20.157 } 00:12:20.157 ]' 00:12:20.157 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.157 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.157 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.436 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:20.436 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.436 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.436 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.436 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.702 16:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:12:21.271 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.271 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:21.271 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.271 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.271 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.271 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:21.271 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:21.271 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.530 16:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.097 00:12:22.097 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.097 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.097 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.355 { 00:12:22.355 "auth": { 00:12:22.355 "dhgroup": "ffdhe6144", 00:12:22.355 "digest": "sha256", 00:12:22.355 "state": "completed" 00:12:22.355 }, 00:12:22.355 "cntlid": 35, 00:12:22.355 "listen_address": { 00:12:22.355 "adrfam": "IPv4", 00:12:22.355 "traddr": "10.0.0.2", 00:12:22.355 "trsvcid": "4420", 00:12:22.355 "trtype": "TCP" 00:12:22.355 }, 00:12:22.355 
"peer_address": { 00:12:22.355 "adrfam": "IPv4", 00:12:22.355 "traddr": "10.0.0.1", 00:12:22.355 "trsvcid": "46156", 00:12:22.355 "trtype": "TCP" 00:12:22.355 }, 00:12:22.355 "qid": 0, 00:12:22.355 "state": "enabled", 00:12:22.355 "thread": "nvmf_tgt_poll_group_000" 00:12:22.355 } 00:12:22.355 ]' 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:22.355 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.614 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.614 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.614 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.614 16:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:12:23.181 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.181 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:23.181 16:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.181 16:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.181 16:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.181 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.181 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:23.181 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.440 16:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.007 00:12:24.007 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.007 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.007 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.264 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.264 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.264 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.264 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.264 16:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.264 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.264 { 00:12:24.264 "auth": { 00:12:24.264 "dhgroup": "ffdhe6144", 00:12:24.264 "digest": "sha256", 00:12:24.264 "state": "completed" 00:12:24.264 }, 00:12:24.264 "cntlid": 37, 00:12:24.264 "listen_address": { 00:12:24.264 "adrfam": "IPv4", 00:12:24.264 "traddr": "10.0.0.2", 00:12:24.264 "trsvcid": "4420", 00:12:24.264 "trtype": "TCP" 00:12:24.264 }, 00:12:24.264 "peer_address": { 00:12:24.264 "adrfam": "IPv4", 00:12:24.264 "traddr": "10.0.0.1", 00:12:24.264 "trsvcid": "46172", 00:12:24.264 "trtype": "TCP" 00:12:24.264 }, 00:12:24.264 "qid": 0, 00:12:24.264 "state": "enabled", 00:12:24.264 "thread": "nvmf_tgt_poll_group_000" 00:12:24.264 } 00:12:24.264 ]' 00:12:24.264 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.264 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.264 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.521 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:24.521 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.521 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.521 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.521 16:27:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.778 16:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:12:25.342 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.342 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:25.342 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.342 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.342 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.342 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.342 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:25.342 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:25.599 16:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.164 00:12:26.164 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:12:26.164 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.164 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.421 { 00:12:26.421 "auth": { 00:12:26.421 "dhgroup": "ffdhe6144", 00:12:26.421 "digest": "sha256", 00:12:26.421 "state": "completed" 00:12:26.421 }, 00:12:26.421 "cntlid": 39, 00:12:26.421 "listen_address": { 00:12:26.421 "adrfam": "IPv4", 00:12:26.421 "traddr": "10.0.0.2", 00:12:26.421 "trsvcid": "4420", 00:12:26.421 "trtype": "TCP" 00:12:26.421 }, 00:12:26.421 "peer_address": { 00:12:26.421 "adrfam": "IPv4", 00:12:26.421 "traddr": "10.0.0.1", 00:12:26.421 "trsvcid": "46198", 00:12:26.421 "trtype": "TCP" 00:12:26.421 }, 00:12:26.421 "qid": 0, 00:12:26.421 "state": "enabled", 00:12:26.421 "thread": "nvmf_tgt_poll_group_000" 00:12:26.421 } 00:12:26.421 ]' 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.421 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.679 16:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
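(Each iteration ends the way the lines above do: after the RPC-driven attach, qpair check, and detach, the test re-connects with the Linux kernel initiator, passing the raw DHHC-1 secrets on the nvme-cli command line, then disconnects and removes the host before the next digest/dhgroup/key combination. A condensed sketch follows; the secrets are abbreviated here, the full DHHC-1 strings are the ones printed in the log, and the --dhchap-ctrl-secret option is only passed for keys that have a controller key, which is why the key3 connect above carries a single secret.)

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOST_NQN" --hostid "${HOST_NQN##*:}" \
        --dhchap-secret 'DHHC-1:xx:...' \
        --dhchap-ctrl-secret 'DHHC-1:xx:...'   # omitted when the key has no controller secret
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"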
00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.610 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.868 16:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.868 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:27.868 16:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.433 00:12:28.433 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.433 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.433 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.433 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.433 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.433 16:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.433 16:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.433 16:27:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.433 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.433 { 00:12:28.433 "auth": { 00:12:28.433 "dhgroup": "ffdhe8192", 00:12:28.433 "digest": "sha256", 00:12:28.433 "state": "completed" 00:12:28.433 }, 00:12:28.433 "cntlid": 41, 
00:12:28.433 "listen_address": { 00:12:28.433 "adrfam": "IPv4", 00:12:28.433 "traddr": "10.0.0.2", 00:12:28.433 "trsvcid": "4420", 00:12:28.433 "trtype": "TCP" 00:12:28.433 }, 00:12:28.433 "peer_address": { 00:12:28.434 "adrfam": "IPv4", 00:12:28.434 "traddr": "10.0.0.1", 00:12:28.434 "trsvcid": "46212", 00:12:28.434 "trtype": "TCP" 00:12:28.434 }, 00:12:28.434 "qid": 0, 00:12:28.434 "state": "enabled", 00:12:28.434 "thread": "nvmf_tgt_poll_group_000" 00:12:28.434 } 00:12:28.434 ]' 00:12:28.434 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.692 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.692 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.692 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.692 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.692 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.692 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.692 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.958 16:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:12:29.525 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.525 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:29.525 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.525 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.525 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.525 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.525 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:29.525 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:29.783 
16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:29.783 16:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.349 00:12:30.349 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.350 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.350 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.350 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.350 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.350 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.350 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.350 16:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.350 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.350 { 00:12:30.350 "auth": { 00:12:30.350 "dhgroup": "ffdhe8192", 00:12:30.350 "digest": "sha256", 00:12:30.350 "state": "completed" 00:12:30.350 }, 00:12:30.350 "cntlid": 43, 00:12:30.350 "listen_address": { 00:12:30.350 "adrfam": "IPv4", 00:12:30.350 "traddr": "10.0.0.2", 00:12:30.350 "trsvcid": "4420", 00:12:30.350 "trtype": "TCP" 00:12:30.350 }, 00:12:30.350 "peer_address": { 00:12:30.350 "adrfam": "IPv4", 00:12:30.350 "traddr": "10.0.0.1", 00:12:30.350 "trsvcid": "50722", 00:12:30.350 "trtype": "TCP" 00:12:30.350 }, 00:12:30.350 "qid": 0, 00:12:30.350 "state": "enabled", 00:12:30.350 "thread": "nvmf_tgt_poll_group_000" 00:12:30.350 } 00:12:30.350 ]' 00:12:30.350 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.608 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.608 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.608 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.608 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.608 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.608 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.608 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.866 16:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:12:31.430 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.688 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:31.688 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.688 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.688 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.688 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.688 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:31.688 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:31.947 16:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.512 00:12:32.512 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.512 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.512 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.770 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.770 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.771 { 00:12:32.771 "auth": { 00:12:32.771 "dhgroup": "ffdhe8192", 00:12:32.771 "digest": "sha256", 00:12:32.771 "state": "completed" 00:12:32.771 }, 00:12:32.771 "cntlid": 45, 00:12:32.771 "listen_address": { 00:12:32.771 "adrfam": "IPv4", 00:12:32.771 "traddr": "10.0.0.2", 00:12:32.771 "trsvcid": "4420", 00:12:32.771 "trtype": "TCP" 00:12:32.771 }, 00:12:32.771 "peer_address": { 00:12:32.771 "adrfam": "IPv4", 00:12:32.771 "traddr": "10.0.0.1", 00:12:32.771 "trsvcid": "50752", 00:12:32.771 "trtype": "TCP" 00:12:32.771 }, 00:12:32.771 "qid": 0, 00:12:32.771 "state": "enabled", 00:12:32.771 "thread": "nvmf_tgt_poll_group_000" 00:12:32.771 } 00:12:32.771 ]' 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.771 16:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.028 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:12:33.594 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.594 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:33.594 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.594 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.594 16:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.594 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.594 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:33.594 16:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:33.864 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.453 00:12:34.453 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.453 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.453 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.711 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.711 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.711 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.711 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.711 16:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.711 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:34.711 { 00:12:34.711 "auth": { 00:12:34.711 "dhgroup": "ffdhe8192", 00:12:34.711 "digest": "sha256", 00:12:34.711 "state": "completed" 00:12:34.711 }, 00:12:34.711 "cntlid": 47, 00:12:34.711 "listen_address": { 00:12:34.711 "adrfam": "IPv4", 00:12:34.711 "traddr": "10.0.0.2", 00:12:34.711 "trsvcid": "4420", 00:12:34.711 "trtype": "TCP" 00:12:34.711 }, 00:12:34.711 "peer_address": { 00:12:34.711 "adrfam": "IPv4", 00:12:34.711 "traddr": "10.0.0.1", 00:12:34.711 "trsvcid": "50776", 00:12:34.711 "trtype": "TCP" 00:12:34.711 }, 00:12:34.711 "qid": 0, 00:12:34.711 "state": "enabled", 00:12:34.711 "thread": "nvmf_tgt_poll_group_000" 00:12:34.711 } 00:12:34.711 ]' 00:12:34.711 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.711 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.711 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.968 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.968 16:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.968 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.969 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.969 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.226 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.789 16:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
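(The jump right above, from sha256/ffdhe8192 to sha384 with dhgroup "null", is the outer digest loop advancing; the null group means the challenge-response runs without a Diffie-Hellman exchange, using the shared secret alone. The whole section is the product of three nested loops visible in the xtrace markers at target/auth.sh@91-@94. The sketch below shows only their shape; the digests/dhgroups/keys arrays are defined earlier in auth.sh and are not reproduced here.)

    # rough shape only; the array contents come from earlier in auth.sh
    for digest in "${digests[@]}"; do           # sha256, sha384, ... per the log
        for dhgroup in "${dhgroups[@]}"; do     # null and the ffdhe groups per the log
            for keyid in "${!keys[@]}"; do      # key0..key3
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done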
00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.046 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.303 00:12:36.303 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.303 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.303 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.559 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.559 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.559 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.559 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.816 16:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.816 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.816 { 00:12:36.816 "auth": { 00:12:36.816 "dhgroup": "null", 00:12:36.816 "digest": "sha384", 00:12:36.816 "state": "completed" 00:12:36.816 }, 00:12:36.816 "cntlid": 49, 00:12:36.816 "listen_address": { 00:12:36.816 "adrfam": "IPv4", 00:12:36.816 "traddr": "10.0.0.2", 00:12:36.816 "trsvcid": "4420", 00:12:36.816 "trtype": "TCP" 00:12:36.816 }, 00:12:36.816 "peer_address": { 00:12:36.816 "adrfam": "IPv4", 00:12:36.816 "traddr": "10.0.0.1", 00:12:36.816 "trsvcid": "50806", 00:12:36.816 "trtype": "TCP" 00:12:36.816 }, 00:12:36.816 "qid": 0, 00:12:36.816 "state": "enabled", 00:12:36.816 "thread": "nvmf_tgt_poll_group_000" 00:12:36.816 } 00:12:36.816 ]' 00:12:36.816 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.816 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.816 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:36.816 16:27:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:36.816 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:36.816 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.816 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.816 16:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.074 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:12:38.006 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.006 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:38.006 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.006 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 16:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.006 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.006 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:38.006 16:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.006 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.267 00:12:38.267 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.267 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.267 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.832 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.832 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.833 { 00:12:38.833 "auth": { 00:12:38.833 "dhgroup": "null", 00:12:38.833 "digest": "sha384", 00:12:38.833 "state": "completed" 00:12:38.833 }, 00:12:38.833 "cntlid": 51, 00:12:38.833 "listen_address": { 00:12:38.833 "adrfam": "IPv4", 00:12:38.833 "traddr": "10.0.0.2", 00:12:38.833 "trsvcid": "4420", 00:12:38.833 "trtype": "TCP" 00:12:38.833 }, 00:12:38.833 "peer_address": { 00:12:38.833 "adrfam": "IPv4", 00:12:38.833 "traddr": "10.0.0.1", 00:12:38.833 "trsvcid": "59332", 00:12:38.833 "trtype": "TCP" 00:12:38.833 }, 00:12:38.833 "qid": 0, 00:12:38.833 "state": "enabled", 00:12:38.833 "thread": "nvmf_tgt_poll_group_000" 00:12:38.833 } 00:12:38.833 ]' 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.833 16:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.090 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:12:39.655 16:27:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.655 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:39.655 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.655 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.655 16:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.655 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.655 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:39.655 16:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.913 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.171 00:12:40.429 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.429 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.429 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.687 { 00:12:40.687 "auth": { 00:12:40.687 "dhgroup": "null", 00:12:40.687 "digest": "sha384", 00:12:40.687 "state": "completed" 00:12:40.687 }, 00:12:40.687 "cntlid": 53, 00:12:40.687 "listen_address": { 00:12:40.687 "adrfam": "IPv4", 00:12:40.687 "traddr": "10.0.0.2", 00:12:40.687 "trsvcid": "4420", 00:12:40.687 "trtype": "TCP" 00:12:40.687 }, 00:12:40.687 "peer_address": { 00:12:40.687 "adrfam": "IPv4", 00:12:40.687 "traddr": "10.0.0.1", 00:12:40.687 "trsvcid": "59354", 00:12:40.687 "trtype": "TCP" 00:12:40.687 }, 00:12:40.687 "qid": 0, 00:12:40.687 "state": "enabled", 00:12:40.687 "thread": "nvmf_tgt_poll_group_000" 00:12:40.687 } 00:12:40.687 ]' 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.687 16:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.944 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:12:41.932 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.932 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:41.932 16:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.932 16:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.932 16:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.932 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.932 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:41.932 16:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.932 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.190 00:12:42.190 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.190 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.190 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.447 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.447 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.447 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.447 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.447 16:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.447 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.447 { 00:12:42.447 "auth": { 00:12:42.447 "dhgroup": "null", 00:12:42.447 "digest": "sha384", 00:12:42.447 "state": "completed" 00:12:42.447 }, 00:12:42.447 "cntlid": 55, 00:12:42.447 "listen_address": { 00:12:42.447 "adrfam": "IPv4", 00:12:42.447 "traddr": "10.0.0.2", 00:12:42.447 "trsvcid": "4420", 00:12:42.447 "trtype": "TCP" 00:12:42.447 }, 00:12:42.447 "peer_address": { 00:12:42.447 "adrfam": "IPv4", 00:12:42.447 "traddr": "10.0.0.1", 00:12:42.447 "trsvcid": "59378", 00:12:42.447 "trtype": "TCP" 00:12:42.447 }, 00:12:42.447 "qid": 0, 00:12:42.447 "state": "enabled", 00:12:42.447 "thread": "nvmf_tgt_poll_group_000" 00:12:42.447 } 00:12:42.447 ]' 00:12:42.447 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.705 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.705 16:28:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.706 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:42.706 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.706 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.706 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.706 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.964 16:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:12:43.530 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.530 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:43.530 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.531 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.531 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.531 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.531 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.531 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:43.531 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.788 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.789 16:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.047 00:12:44.047 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.047 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.047 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.304 { 00:12:44.304 "auth": { 00:12:44.304 "dhgroup": "ffdhe2048", 00:12:44.304 "digest": "sha384", 00:12:44.304 "state": "completed" 00:12:44.304 }, 00:12:44.304 "cntlid": 57, 00:12:44.304 "listen_address": { 00:12:44.304 "adrfam": "IPv4", 00:12:44.304 "traddr": "10.0.0.2", 00:12:44.304 "trsvcid": "4420", 00:12:44.304 "trtype": "TCP" 00:12:44.304 }, 00:12:44.304 "peer_address": { 00:12:44.304 "adrfam": "IPv4", 00:12:44.304 "traddr": "10.0.0.1", 00:12:44.304 "trsvcid": "59404", 00:12:44.304 "trtype": "TCP" 00:12:44.304 }, 00:12:44.304 "qid": 0, 00:12:44.304 "state": "enabled", 00:12:44.304 "thread": "nvmf_tgt_poll_group_000" 00:12:44.304 } 00:12:44.304 ]' 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:44.304 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.561 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.561 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.561 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.818 16:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret 
DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:12:45.383 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.383 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:45.383 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.383 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.383 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.383 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.383 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:45.383 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.641 16:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.899 00:12:45.899 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.899 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.899 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.157 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
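Each cycle then finishes with a kernel-initiator pass, as the surrounding entries show: the host reconnects with nvme-cli, handing it the DH-CHAP secrets directly in DHHC-1 form, then disconnects and is removed from the subsystem before the next digest/dhgroup/key combination. A minimal sketch of that leg, with HOST_NQN, HOST_ID, and the secret strings as placeholders for the values printed in the log:

  # connect through the kernel initiator, passing host and controller secrets
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOST_NQN" \
      --hostid "$HOST_ID" \
      --dhchap-secret "DHHC-1:00:<host secret>" \
      --dhchap-ctrl-secret "DHHC-1:03:<controller secret>"
  # drop the connection and de-authorize the host before the next combination
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"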
00:12:46.157 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.157 16:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.157 16:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.157 16:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.157 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.157 { 00:12:46.157 "auth": { 00:12:46.157 "dhgroup": "ffdhe2048", 00:12:46.157 "digest": "sha384", 00:12:46.157 "state": "completed" 00:12:46.157 }, 00:12:46.157 "cntlid": 59, 00:12:46.157 "listen_address": { 00:12:46.157 "adrfam": "IPv4", 00:12:46.157 "traddr": "10.0.0.2", 00:12:46.157 "trsvcid": "4420", 00:12:46.157 "trtype": "TCP" 00:12:46.157 }, 00:12:46.157 "peer_address": { 00:12:46.157 "adrfam": "IPv4", 00:12:46.157 "traddr": "10.0.0.1", 00:12:46.157 "trsvcid": "59426", 00:12:46.157 "trtype": "TCP" 00:12:46.157 }, 00:12:46.157 "qid": 0, 00:12:46.157 "state": "enabled", 00:12:46.157 "thread": "nvmf_tgt_poll_group_000" 00:12:46.157 } 00:12:46.157 ]' 00:12:46.157 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.415 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.415 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.415 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:46.415 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.415 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.415 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.415 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.673 16:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:12:47.306 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.306 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:47.306 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.306 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.306 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.306 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.306 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:47.306 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.579 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.837 00:12:47.837 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.837 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.837 16:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.106 { 00:12:48.106 "auth": { 00:12:48.106 "dhgroup": "ffdhe2048", 00:12:48.106 "digest": "sha384", 00:12:48.106 "state": "completed" 00:12:48.106 }, 00:12:48.106 "cntlid": 61, 00:12:48.106 "listen_address": { 00:12:48.106 "adrfam": "IPv4", 00:12:48.106 "traddr": "10.0.0.2", 00:12:48.106 "trsvcid": "4420", 00:12:48.106 "trtype": "TCP" 00:12:48.106 }, 00:12:48.106 "peer_address": { 00:12:48.106 "adrfam": "IPv4", 00:12:48.106 "traddr": "10.0.0.1", 00:12:48.106 "trsvcid": "59452", 00:12:48.106 "trtype": "TCP" 00:12:48.106 }, 00:12:48.106 "qid": 0, 00:12:48.106 "state": "enabled", 00:12:48.106 "thread": 
"nvmf_tgt_poll_group_000" 00:12:48.106 } 00:12:48.106 ]' 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.106 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.370 16:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.300 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:49.865 00:12:49.865 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.865 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.865 16:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.865 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.865 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.865 16:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.865 16:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.122 { 00:12:50.122 "auth": { 00:12:50.122 "dhgroup": "ffdhe2048", 00:12:50.122 "digest": "sha384", 00:12:50.122 "state": "completed" 00:12:50.122 }, 00:12:50.122 "cntlid": 63, 00:12:50.122 "listen_address": { 00:12:50.122 "adrfam": "IPv4", 00:12:50.122 "traddr": "10.0.0.2", 00:12:50.122 "trsvcid": "4420", 00:12:50.122 "trtype": "TCP" 00:12:50.122 }, 00:12:50.122 "peer_address": { 00:12:50.122 "adrfam": "IPv4", 00:12:50.122 "traddr": "10.0.0.1", 00:12:50.122 "trsvcid": "57392", 00:12:50.122 "trtype": "TCP" 00:12:50.122 }, 00:12:50.122 "qid": 0, 00:12:50.122 "state": "enabled", 00:12:50.122 "thread": "nvmf_tgt_poll_group_000" 00:12:50.122 } 00:12:50.122 ]' 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.122 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.379 16:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 
93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:12:50.945 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.945 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:50.945 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.945 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.945 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.945 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.945 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:50.945 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:50.945 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.203 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.461 00:12:51.461 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:51.461 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.461 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.719 { 00:12:51.719 "auth": { 00:12:51.719 "dhgroup": "ffdhe3072", 00:12:51.719 "digest": "sha384", 00:12:51.719 "state": "completed" 00:12:51.719 }, 00:12:51.719 "cntlid": 65, 00:12:51.719 "listen_address": { 00:12:51.719 "adrfam": "IPv4", 00:12:51.719 "traddr": "10.0.0.2", 00:12:51.719 "trsvcid": "4420", 00:12:51.719 "trtype": "TCP" 00:12:51.719 }, 00:12:51.719 "peer_address": { 00:12:51.719 "adrfam": "IPv4", 00:12:51.719 "traddr": "10.0.0.1", 00:12:51.719 "trsvcid": "57420", 00:12:51.719 "trtype": "TCP" 00:12:51.719 }, 00:12:51.719 "qid": 0, 00:12:51.719 "state": "enabled", 00:12:51.719 "thread": "nvmf_tgt_poll_group_000" 00:12:51.719 } 00:12:51.719 ]' 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.719 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.977 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.977 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.977 16:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.235 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:12:52.802 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.802 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:52.802 16:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.802 16:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.802 16:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.802 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:52.802 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:52.802 16:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.060 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.318 00:12:53.318 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.318 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.318 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.576 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.576 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.576 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.576 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.576 16:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.576 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.576 { 00:12:53.576 "auth": { 00:12:53.576 "dhgroup": "ffdhe3072", 00:12:53.577 "digest": "sha384", 00:12:53.577 "state": "completed" 00:12:53.577 }, 00:12:53.577 "cntlid": 67, 00:12:53.577 "listen_address": { 00:12:53.577 "adrfam": "IPv4", 00:12:53.577 "traddr": "10.0.0.2", 00:12:53.577 "trsvcid": "4420", 00:12:53.577 "trtype": "TCP" 00:12:53.577 }, 00:12:53.577 
"peer_address": { 00:12:53.577 "adrfam": "IPv4", 00:12:53.577 "traddr": "10.0.0.1", 00:12:53.577 "trsvcid": "57456", 00:12:53.577 "trtype": "TCP" 00:12:53.577 }, 00:12:53.577 "qid": 0, 00:12:53.577 "state": "enabled", 00:12:53.577 "thread": "nvmf_tgt_poll_group_000" 00:12:53.577 } 00:12:53.577 ]' 00:12:53.577 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.577 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.577 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.577 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.577 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.836 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.836 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.836 16:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.094 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:12:54.661 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.661 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:54.661 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.661 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.661 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.661 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.661 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:54.661 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.925 16:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.182 00:12:55.182 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.182 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.182 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.439 { 00:12:55.439 "auth": { 00:12:55.439 "dhgroup": "ffdhe3072", 00:12:55.439 "digest": "sha384", 00:12:55.439 "state": "completed" 00:12:55.439 }, 00:12:55.439 "cntlid": 69, 00:12:55.439 "listen_address": { 00:12:55.439 "adrfam": "IPv4", 00:12:55.439 "traddr": "10.0.0.2", 00:12:55.439 "trsvcid": "4420", 00:12:55.439 "trtype": "TCP" 00:12:55.439 }, 00:12:55.439 "peer_address": { 00:12:55.439 "adrfam": "IPv4", 00:12:55.439 "traddr": "10.0.0.1", 00:12:55.439 "trsvcid": "57480", 00:12:55.439 "trtype": "TCP" 00:12:55.439 }, 00:12:55.439 "qid": 0, 00:12:55.439 "state": "enabled", 00:12:55.439 "thread": "nvmf_tgt_poll_group_000" 00:12:55.439 } 00:12:55.439 ]' 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.439 16:28:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.696 16:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:12:56.260 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.260 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:56.260 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.260 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.260 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.260 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.260 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:56.260 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:56.517 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:56.517 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.518 16:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.081 00:12:57.081 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:12:57.081 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:57.081 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.339 { 00:12:57.339 "auth": { 00:12:57.339 "dhgroup": "ffdhe3072", 00:12:57.339 "digest": "sha384", 00:12:57.339 "state": "completed" 00:12:57.339 }, 00:12:57.339 "cntlid": 71, 00:12:57.339 "listen_address": { 00:12:57.339 "adrfam": "IPv4", 00:12:57.339 "traddr": "10.0.0.2", 00:12:57.339 "trsvcid": "4420", 00:12:57.339 "trtype": "TCP" 00:12:57.339 }, 00:12:57.339 "peer_address": { 00:12:57.339 "adrfam": "IPv4", 00:12:57.339 "traddr": "10.0.0.1", 00:12:57.339 "trsvcid": "57520", 00:12:57.339 "trtype": "TCP" 00:12:57.339 }, 00:12:57.339 "qid": 0, 00:12:57.339 "state": "enabled", 00:12:57.339 "thread": "nvmf_tgt_poll_group_000" 00:12:57.339 } 00:12:57.339 ]' 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.339 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.597 16:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:12:58.161 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.161 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:12:58.161 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.161 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.161 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
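
The trace above has just completed one sha384/ffdhe3072 cycle (key3) and moves on to ffdhe4096 in the next entries. Every cycle in this excerpt drives the same RPC sequence; a minimal sketch of that sequence follows, for readability only. The host-side RPC socket (/var/tmp/host.sock), addresses, NQNs and key names are copied from the trace; the key objects named key1/ckey1 are assumed to have been registered with the target earlier in target/auth.sh (outside this excerpt), the target-side calls are assumed to go to the default SPDK RPC socket, and the kernel nvme connect/disconnect step the script runs between detach and remove_host is omitted here (it is sketched at the end of the excerpt).

  # Identifiers copied from the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f

  # Host (initiator) side: restrict DH-CHAP negotiation to one digest/dhgroup pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # Target side: authorize the host on the subsystem with a key pair
  # (key1 = host key, ckey1 = controller key for bidirectional authentication).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller, which performs the DH-CHAP handshake.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify: the controller exists and the target-side qpair finished authentication.
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect completed

  # Tear down and de-authorize before the next key/dhgroup iteration.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
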
00:12:58.161 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:58.161 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.161 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:58.161 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.419 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.686 00:12:58.944 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.944 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.944 16:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.201 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.201 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.201 16:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.201 16:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.201 16:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.201 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.201 { 00:12:59.201 "auth": { 00:12:59.201 "dhgroup": "ffdhe4096", 00:12:59.201 "digest": "sha384", 00:12:59.201 "state": "completed" 00:12:59.201 }, 00:12:59.201 "cntlid": 73, 
00:12:59.201 "listen_address": { 00:12:59.201 "adrfam": "IPv4", 00:12:59.201 "traddr": "10.0.0.2", 00:12:59.201 "trsvcid": "4420", 00:12:59.201 "trtype": "TCP" 00:12:59.201 }, 00:12:59.201 "peer_address": { 00:12:59.201 "adrfam": "IPv4", 00:12:59.201 "traddr": "10.0.0.1", 00:12:59.201 "trsvcid": "46444", 00:12:59.201 "trtype": "TCP" 00:12:59.201 }, 00:12:59.201 "qid": 0, 00:12:59.201 "state": "enabled", 00:12:59.201 "thread": "nvmf_tgt_poll_group_000" 00:12:59.201 } 00:12:59.201 ]' 00:12:59.201 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.201 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:59.202 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.202 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:59.202 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.202 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.202 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.202 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.459 16:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:00.399 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:00.399 
16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.400 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.400 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.400 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.400 16:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.400 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.400 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.684 00:13:00.684 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.684 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.684 16:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.948 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.948 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.948 16:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.948 16:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.948 16:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.948 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.948 { 00:13:00.948 "auth": { 00:13:00.948 "dhgroup": "ffdhe4096", 00:13:00.948 "digest": "sha384", 00:13:00.948 "state": "completed" 00:13:00.948 }, 00:13:00.948 "cntlid": 75, 00:13:00.948 "listen_address": { 00:13:00.948 "adrfam": "IPv4", 00:13:00.948 "traddr": "10.0.0.2", 00:13:00.948 "trsvcid": "4420", 00:13:00.948 "trtype": "TCP" 00:13:00.948 }, 00:13:00.948 "peer_address": { 00:13:00.948 "adrfam": "IPv4", 00:13:00.948 "traddr": "10.0.0.1", 00:13:00.948 "trsvcid": "46468", 00:13:00.948 "trtype": "TCP" 00:13:00.948 }, 00:13:00.948 "qid": 0, 00:13:00.948 "state": "enabled", 00:13:00.948 "thread": "nvmf_tgt_poll_group_000" 00:13:00.948 } 00:13:00.948 ]' 00:13:00.948 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.206 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:01.206 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.206 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:01.206 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.206 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.206 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.206 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.464 16:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:13:02.029 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.029 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:02.029 16:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.029 16:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.029 16:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.029 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.029 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:02.029 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.287 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.853 00:13:02.853 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.853 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.853 16:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.853 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.853 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.853 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.853 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.853 16:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.853 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:02.853 { 00:13:02.853 "auth": { 00:13:02.853 "dhgroup": "ffdhe4096", 00:13:02.853 "digest": "sha384", 00:13:02.853 "state": "completed" 00:13:02.853 }, 00:13:02.853 "cntlid": 77, 00:13:02.853 "listen_address": { 00:13:02.853 "adrfam": "IPv4", 00:13:02.853 "traddr": "10.0.0.2", 00:13:02.853 "trsvcid": "4420", 00:13:02.853 "trtype": "TCP" 00:13:02.853 }, 00:13:02.853 "peer_address": { 00:13:02.853 "adrfam": "IPv4", 00:13:02.853 "traddr": "10.0.0.1", 00:13:02.853 "trsvcid": "46486", 00:13:02.853 "trtype": "TCP" 00:13:02.853 }, 00:13:02.853 "qid": 0, 00:13:02.853 "state": "enabled", 00:13:02.853 "thread": "nvmf_tgt_poll_group_000" 00:13:02.853 } 00:13:02.853 ]' 00:13:02.853 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.111 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:03.111 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.111 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:03.111 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.111 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.111 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.111 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.368 16:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:13:03.934 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.934 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:03.934 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.934 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.934 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.934 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:03.934 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.934 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:04.192 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:04.449 00:13:04.449 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:04.449 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:04.449 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.706 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.706 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.706 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.706 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.706 16:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.706 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:04.706 { 00:13:04.706 "auth": { 00:13:04.706 "dhgroup": "ffdhe4096", 00:13:04.706 "digest": "sha384", 00:13:04.706 "state": "completed" 00:13:04.706 }, 00:13:04.706 "cntlid": 79, 00:13:04.706 "listen_address": { 00:13:04.706 "adrfam": "IPv4", 00:13:04.706 "traddr": "10.0.0.2", 00:13:04.706 "trsvcid": "4420", 00:13:04.706 "trtype": "TCP" 00:13:04.706 }, 00:13:04.706 "peer_address": { 00:13:04.706 "adrfam": "IPv4", 00:13:04.706 "traddr": "10.0.0.1", 00:13:04.706 "trsvcid": "46510", 00:13:04.706 "trtype": "TCP" 00:13:04.706 }, 00:13:04.706 "qid": 0, 00:13:04.706 "state": "enabled", 00:13:04.706 "thread": "nvmf_tgt_poll_group_000" 00:13:04.706 } 00:13:04.706 ]' 00:13:04.706 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:04.963 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:04.963 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:04.963 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:04.963 16:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:04.963 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.963 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.963 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.220 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:13:05.785 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.785 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:05.785 16:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.785 16:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.785 16:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.785 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:05.785 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:05.785 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:05.785 16:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
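
Each nvmf_subsystem_get_qpairs dump in this trace reports the negotiated parameters in its auth object, and auth.sh asserts on them one jq expression at a time (digest, dhgroup, state). A compact equivalent, assuming one of the dumps had been saved to a file named qpairs.json (hypothetical, not something the script does), would be a single jq call:

  # Hypothetical one-shot summary of a saved qpairs dump (not part of auth.sh).
  jq -r '.[0] | "\(.auth.digest) \(.auth.dhgroup) \(.auth.state) cntlid=\(.cntlid) peer=\(.peer_address.traddr):\(.peer_address.trsvcid)"' qpairs.json
  # For the sha384/ffdhe4096 dump above this would print:
  # sha384 ffdhe4096 completed cntlid=79 peer=10.0.0.1:46510
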
00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.042 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.300 00:13:06.300 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:06.300 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:06.300 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.558 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.558 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.558 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.558 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.558 16:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.558 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:06.558 { 00:13:06.558 "auth": { 00:13:06.558 "dhgroup": "ffdhe6144", 00:13:06.558 "digest": "sha384", 00:13:06.558 "state": "completed" 00:13:06.558 }, 00:13:06.558 "cntlid": 81, 00:13:06.558 "listen_address": { 00:13:06.558 "adrfam": "IPv4", 00:13:06.558 "traddr": "10.0.0.2", 00:13:06.558 "trsvcid": "4420", 00:13:06.558 "trtype": "TCP" 00:13:06.558 }, 00:13:06.558 "peer_address": { 00:13:06.558 "adrfam": "IPv4", 00:13:06.558 "traddr": "10.0.0.1", 00:13:06.558 "trsvcid": "46540", 00:13:06.558 "trtype": "TCP" 00:13:06.558 }, 00:13:06.558 "qid": 0, 00:13:06.558 "state": "enabled", 00:13:06.558 "thread": "nvmf_tgt_poll_group_000" 00:13:06.558 } 00:13:06.558 ]' 00:13:06.558 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:06.816 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:06.816 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:06.816 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:13:06.816 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.816 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.816 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.816 16:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.074 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:13:07.640 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.640 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:07.640 16:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.640 16:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.640 16:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.640 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:07.640 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:07.640 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.905 16:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.163 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.421 { 00:13:08.421 "auth": { 00:13:08.421 "dhgroup": "ffdhe6144", 00:13:08.421 "digest": "sha384", 00:13:08.421 "state": "completed" 00:13:08.421 }, 00:13:08.421 "cntlid": 83, 00:13:08.421 "listen_address": { 00:13:08.421 "adrfam": "IPv4", 00:13:08.421 "traddr": "10.0.0.2", 00:13:08.421 "trsvcid": "4420", 00:13:08.421 "trtype": "TCP" 00:13:08.421 }, 00:13:08.421 "peer_address": { 00:13:08.421 "adrfam": "IPv4", 00:13:08.421 "traddr": "10.0.0.1", 00:13:08.421 "trsvcid": "46568", 00:13:08.421 "trtype": "TCP" 00:13:08.421 }, 00:13:08.421 "qid": 0, 00:13:08.421 "state": "enabled", 00:13:08.421 "thread": "nvmf_tgt_poll_group_000" 00:13:08.421 } 00:13:08.421 ]' 00:13:08.421 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.678 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:08.678 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.678 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:08.678 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.678 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.678 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.678 16:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.935 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:13:09.500 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:09.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.500 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:09.500 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.500 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.500 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.500 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:09.500 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.500 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.757 16:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.323 00:13:10.323 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.323 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.323 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.581 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.581 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.581 16:28:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.581 16:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.581 16:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.581 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:10.581 { 00:13:10.581 "auth": { 00:13:10.581 "dhgroup": "ffdhe6144", 00:13:10.581 "digest": "sha384", 00:13:10.581 "state": "completed" 00:13:10.581 }, 00:13:10.581 "cntlid": 85, 00:13:10.581 "listen_address": { 00:13:10.581 "adrfam": "IPv4", 00:13:10.581 "traddr": "10.0.0.2", 00:13:10.581 "trsvcid": "4420", 00:13:10.581 "trtype": "TCP" 00:13:10.581 }, 00:13:10.581 "peer_address": { 00:13:10.581 "adrfam": "IPv4", 00:13:10.581 "traddr": "10.0.0.1", 00:13:10.581 "trsvcid": "57070", 00:13:10.581 "trtype": "TCP" 00:13:10.581 }, 00:13:10.581 "qid": 0, 00:13:10.581 "state": "enabled", 00:13:10.581 "thread": "nvmf_tgt_poll_group_000" 00:13:10.581 } 00:13:10.581 ]' 00:13:10.581 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:10.581 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:10.581 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.839 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:10.839 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.839 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.839 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.839 16:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.097 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:13:11.663 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.663 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:11.663 16:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.663 16:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.663 16:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.663 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:11.663 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:11.663 16:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:11.921 16:28:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:11.921 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.486 00:13:12.486 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.486 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.486 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.745 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.745 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.745 16:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.745 16:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.745 16:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.745 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.745 { 00:13:12.745 "auth": { 00:13:12.745 "dhgroup": "ffdhe6144", 00:13:12.745 "digest": "sha384", 00:13:12.745 "state": "completed" 00:13:12.745 }, 00:13:12.745 "cntlid": 87, 00:13:12.745 "listen_address": { 00:13:12.745 "adrfam": "IPv4", 00:13:12.745 "traddr": "10.0.0.2", 00:13:12.745 "trsvcid": "4420", 00:13:12.745 "trtype": "TCP" 00:13:12.745 }, 00:13:12.745 "peer_address": { 00:13:12.745 "adrfam": "IPv4", 00:13:12.745 "traddr": "10.0.0.1", 00:13:12.745 "trsvcid": "57094", 00:13:12.745 "trtype": "TCP" 00:13:12.745 }, 00:13:12.745 "qid": 0, 00:13:12.745 "state": "enabled", 00:13:12.745 "thread": "nvmf_tgt_poll_group_000" 00:13:12.745 } 00:13:12.745 ]' 00:13:12.745 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.745 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:13:12.745 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.003 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:13.003 16:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.003 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.003 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.003 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.261 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:13:13.843 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.843 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:13.843 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.843 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.843 16:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.843 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:13.843 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.843 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:13.843 16:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:14.105 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:14.105 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:14.105 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:14.105 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:14.105 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:14.105 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.106 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.106 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.106 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.106 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.106 16:28:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.106 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.672 00:13:14.672 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.672 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.672 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:14.930 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.930 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.930 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.930 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.930 16:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.930 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:14.930 { 00:13:14.930 "auth": { 00:13:14.930 "dhgroup": "ffdhe8192", 00:13:14.930 "digest": "sha384", 00:13:14.930 "state": "completed" 00:13:14.930 }, 00:13:14.930 "cntlid": 89, 00:13:14.930 "listen_address": { 00:13:14.930 "adrfam": "IPv4", 00:13:14.930 "traddr": "10.0.0.2", 00:13:14.930 "trsvcid": "4420", 00:13:14.930 "trtype": "TCP" 00:13:14.930 }, 00:13:14.930 "peer_address": { 00:13:14.930 "adrfam": "IPv4", 00:13:14.930 "traddr": "10.0.0.1", 00:13:14.930 "trsvcid": "57114", 00:13:14.930 "trtype": "TCP" 00:13:14.930 }, 00:13:14.930 "qid": 0, 00:13:14.930 "state": "enabled", 00:13:14.930 "thread": "nvmf_tgt_poll_group_000" 00:13:14.930 } 00:13:14.930 ]' 00:13:14.930 16:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.930 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:14.930 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.930 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:14.930 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.188 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.188 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.188 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.445 16:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret 
DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:13:16.011 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.011 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:16.011 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.011 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.011 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.011 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.011 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:16.011 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.269 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.834 00:13:16.834 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.834 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.834 16:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
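Each authentication pass in this trace follows the same shape: the host-side bdev_nvme options are pinned to a single digest and DH group, the host is registered on the subsystem with the key pair under test, a controller is attached so the new queue pair has to authenticate, and the negotiated parameters are read back before everything is torn down. A condensed shell sketch of one such pass is below; the variable names and the hostrpc/targetrpc aliases are assumptions for readability (the target-side RPC socket is not visible in the trace), while the RPC names, flags, NQNs and addresses are taken from the commands above.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f
SUBNQN=nqn.2024-03.io.spdk:cnode0
hostrpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
targetrpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # target socket assumed; not shown in the trace

# Initiator: restrict DH-HMAC-CHAP negotiation to one digest and one DH group.
$hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Target: authorize the host on the subsystem with the key pair under test.
$targetrpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller; the new queue pair must complete authentication.
$hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Read back the negotiated digest/dhgroup/state, then detach for the next combination.
$targetrpc nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
$hostrpc bdev_nvme_detach_controller nvme0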
00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.092 { 00:13:17.092 "auth": { 00:13:17.092 "dhgroup": "ffdhe8192", 00:13:17.092 "digest": "sha384", 00:13:17.092 "state": "completed" 00:13:17.092 }, 00:13:17.092 "cntlid": 91, 00:13:17.092 "listen_address": { 00:13:17.092 "adrfam": "IPv4", 00:13:17.092 "traddr": "10.0.0.2", 00:13:17.092 "trsvcid": "4420", 00:13:17.092 "trtype": "TCP" 00:13:17.092 }, 00:13:17.092 "peer_address": { 00:13:17.092 "adrfam": "IPv4", 00:13:17.092 "traddr": "10.0.0.1", 00:13:17.092 "trsvcid": "57140", 00:13:17.092 "trtype": "TCP" 00:13:17.092 }, 00:13:17.092 "qid": 0, 00:13:17.092 "state": "enabled", 00:13:17.092 "thread": "nvmf_tgt_poll_group_000" 00:13:17.092 } 00:13:17.092 ]' 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.092 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.658 16:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:13:17.916 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.916 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:17.916 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.916 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.916 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.916 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:17.916 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:13:17.916 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.174 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.740 00:13:18.740 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:18.740 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:18.740 16:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.997 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.997 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.997 16:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.997 16:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.254 16:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.255 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.255 { 00:13:19.255 "auth": { 00:13:19.255 "dhgroup": "ffdhe8192", 00:13:19.255 "digest": "sha384", 00:13:19.255 "state": "completed" 00:13:19.255 }, 00:13:19.255 "cntlid": 93, 00:13:19.255 "listen_address": { 00:13:19.255 "adrfam": "IPv4", 00:13:19.255 "traddr": "10.0.0.2", 00:13:19.255 "trsvcid": "4420", 00:13:19.255 "trtype": "TCP" 00:13:19.255 }, 00:13:19.255 "peer_address": { 00:13:19.255 "adrfam": "IPv4", 00:13:19.255 "traddr": "10.0.0.1", 00:13:19.255 "trsvcid": "38204", 00:13:19.255 
"trtype": "TCP" 00:13:19.255 }, 00:13:19.255 "qid": 0, 00:13:19.255 "state": "enabled", 00:13:19.255 "thread": "nvmf_tgt_poll_group_000" 00:13:19.255 } 00:13:19.255 ]' 00:13:19.255 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.255 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.255 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.255 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.255 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.255 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.255 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.255 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.513 16:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:13:20.078 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.078 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:20.078 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.078 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.078 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.078 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.078 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:20.078 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:13:20.336 16:28:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:20.336 16:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:20.903 00:13:20.903 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:20.903 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:20.903 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.161 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.161 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.161 16:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.161 16:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.161 16:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.161 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.161 { 00:13:21.161 "auth": { 00:13:21.161 "dhgroup": "ffdhe8192", 00:13:21.161 "digest": "sha384", 00:13:21.161 "state": "completed" 00:13:21.161 }, 00:13:21.161 "cntlid": 95, 00:13:21.161 "listen_address": { 00:13:21.161 "adrfam": "IPv4", 00:13:21.161 "traddr": "10.0.0.2", 00:13:21.161 "trsvcid": "4420", 00:13:21.161 "trtype": "TCP" 00:13:21.161 }, 00:13:21.161 "peer_address": { 00:13:21.161 "adrfam": "IPv4", 00:13:21.161 "traddr": "10.0.0.1", 00:13:21.161 "trsvcid": "38236", 00:13:21.161 "trtype": "TCP" 00:13:21.161 }, 00:13:21.161 "qid": 0, 00:13:21.161 "state": "enabled", 00:13:21.161 "thread": "nvmf_tgt_poll_group_000" 00:13:21.161 } 00:13:21.161 ]' 00:13:21.161 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.161 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.161 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.419 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:21.419 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.419 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.419 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.419 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.677 16:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:22.243 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.500 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.757 00:13:22.757 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:13:22.757 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.757 16:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.014 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.015 { 00:13:23.015 "auth": { 00:13:23.015 "dhgroup": "null", 00:13:23.015 "digest": "sha512", 00:13:23.015 "state": "completed" 00:13:23.015 }, 00:13:23.015 "cntlid": 97, 00:13:23.015 "listen_address": { 00:13:23.015 "adrfam": "IPv4", 00:13:23.015 "traddr": "10.0.0.2", 00:13:23.015 "trsvcid": "4420", 00:13:23.015 "trtype": "TCP" 00:13:23.015 }, 00:13:23.015 "peer_address": { 00:13:23.015 "adrfam": "IPv4", 00:13:23.015 "traddr": "10.0.0.1", 00:13:23.015 "trsvcid": "38260", 00:13:23.015 "trtype": "TCP" 00:13:23.015 }, 00:13:23.015 "qid": 0, 00:13:23.015 "state": "enabled", 00:13:23.015 "thread": "nvmf_tgt_poll_group_000" 00:13:23.015 } 00:13:23.015 ]' 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.015 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.580 16:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:13:23.838 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.838 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:23.838 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.838 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.096 
16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.096 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:24.096 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.097 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.355 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.613 00:13:24.613 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.613 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.613 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.872 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.872 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.872 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.872 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.872 16:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.872 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.872 { 00:13:24.872 "auth": { 00:13:24.872 "dhgroup": "null", 00:13:24.872 "digest": "sha512", 00:13:24.872 "state": "completed" 00:13:24.872 }, 00:13:24.872 "cntlid": 99, 00:13:24.872 "listen_address": { 
00:13:24.872 "adrfam": "IPv4", 00:13:24.872 "traddr": "10.0.0.2", 00:13:24.872 "trsvcid": "4420", 00:13:24.872 "trtype": "TCP" 00:13:24.872 }, 00:13:24.872 "peer_address": { 00:13:24.872 "adrfam": "IPv4", 00:13:24.872 "traddr": "10.0.0.1", 00:13:24.872 "trsvcid": "38290", 00:13:24.872 "trtype": "TCP" 00:13:24.872 }, 00:13:24.872 "qid": 0, 00:13:24.872 "state": "enabled", 00:13:24.872 "thread": "nvmf_tgt_poll_group_000" 00:13:24.872 } 00:13:24.872 ]' 00:13:24.872 16:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.872 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.872 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.872 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:24.872 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:25.130 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.130 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.130 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.387 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:13:25.951 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.951 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:25.951 16:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.951 16:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.951 16:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.951 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.951 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:25.951 16:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.209 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.467 00:13:26.467 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.467 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.467 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.732 { 00:13:26.732 "auth": { 00:13:26.732 "dhgroup": "null", 00:13:26.732 "digest": "sha512", 00:13:26.732 "state": "completed" 00:13:26.732 }, 00:13:26.732 "cntlid": 101, 00:13:26.732 "listen_address": { 00:13:26.732 "adrfam": "IPv4", 00:13:26.732 "traddr": "10.0.0.2", 00:13:26.732 "trsvcid": "4420", 00:13:26.732 "trtype": "TCP" 00:13:26.732 }, 00:13:26.732 "peer_address": { 00:13:26.732 "adrfam": "IPv4", 00:13:26.732 "traddr": "10.0.0.1", 00:13:26.732 "trsvcid": "38306", 00:13:26.732 "trtype": "TCP" 00:13:26.732 }, 00:13:26.732 "qid": 0, 00:13:26.732 "state": "enabled", 00:13:26.732 "thread": "nvmf_tgt_poll_group_000" 00:13:26.732 } 00:13:26.732 ]' 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:26.732 16:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.016 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:13:27.588 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.588 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:27.588 16:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.588 16:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.588 16:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.588 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.588 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:27.588 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.846 16:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.104 00:13:28.104 16:28:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.104 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.104 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.362 { 00:13:28.362 "auth": { 00:13:28.362 "dhgroup": "null", 00:13:28.362 "digest": "sha512", 00:13:28.362 "state": "completed" 00:13:28.362 }, 00:13:28.362 "cntlid": 103, 00:13:28.362 "listen_address": { 00:13:28.362 "adrfam": "IPv4", 00:13:28.362 "traddr": "10.0.0.2", 00:13:28.362 "trsvcid": "4420", 00:13:28.362 "trtype": "TCP" 00:13:28.362 }, 00:13:28.362 "peer_address": { 00:13:28.362 "adrfam": "IPv4", 00:13:28.362 "traddr": "10.0.0.1", 00:13:28.362 "trsvcid": "38316", 00:13:28.362 "trtype": "TCP" 00:13:28.362 }, 00:13:28.362 "qid": 0, 00:13:28.362 "state": "enabled", 00:13:28.362 "thread": "nvmf_tgt_poll_group_000" 00:13:28.362 } 00:13:28.362 ]' 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:28.362 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.618 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.618 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.618 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.875 16:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.440 16:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.004 00:13:30.004 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:30.004 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:30.004 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.261 { 00:13:30.261 "auth": { 00:13:30.261 "dhgroup": "ffdhe2048", 00:13:30.261 "digest": "sha512", 00:13:30.261 "state": 
"completed" 00:13:30.261 }, 00:13:30.261 "cntlid": 105, 00:13:30.261 "listen_address": { 00:13:30.261 "adrfam": "IPv4", 00:13:30.261 "traddr": "10.0.0.2", 00:13:30.261 "trsvcid": "4420", 00:13:30.261 "trtype": "TCP" 00:13:30.261 }, 00:13:30.261 "peer_address": { 00:13:30.261 "adrfam": "IPv4", 00:13:30.261 "traddr": "10.0.0.1", 00:13:30.261 "trsvcid": "33314", 00:13:30.261 "trtype": "TCP" 00:13:30.261 }, 00:13:30.261 "qid": 0, 00:13:30.261 "state": "enabled", 00:13:30.261 "thread": "nvmf_tgt_poll_group_000" 00:13:30.261 } 00:13:30.261 ]' 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.261 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.519 16:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # key=key1 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.451 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.017 00:13:32.017 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.017 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.017 16:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.287 { 00:13:32.287 "auth": { 00:13:32.287 "dhgroup": "ffdhe2048", 00:13:32.287 "digest": "sha512", 00:13:32.287 "state": "completed" 00:13:32.287 }, 00:13:32.287 "cntlid": 107, 00:13:32.287 "listen_address": { 00:13:32.287 "adrfam": "IPv4", 00:13:32.287 "traddr": "10.0.0.2", 00:13:32.287 "trsvcid": "4420", 00:13:32.287 "trtype": "TCP" 00:13:32.287 }, 00:13:32.287 "peer_address": { 00:13:32.287 "adrfam": "IPv4", 00:13:32.287 "traddr": "10.0.0.1", 00:13:32.287 "trsvcid": "33344", 00:13:32.287 "trtype": "TCP" 00:13:32.287 }, 00:13:32.287 "qid": 0, 00:13:32.287 "state": "enabled", 00:13:32.287 "thread": "nvmf_tgt_poll_group_000" 00:13:32.287 } 00:13:32.287 ]' 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.287 16:28:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.287 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.547 16:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:33.489 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.490 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.490 16:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.490 16:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.490 16:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.490 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.490 16:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.055 00:13:34.055 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.055 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.055 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.055 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.055 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.313 { 00:13:34.313 "auth": { 00:13:34.313 "dhgroup": "ffdhe2048", 00:13:34.313 "digest": "sha512", 00:13:34.313 "state": "completed" 00:13:34.313 }, 00:13:34.313 "cntlid": 109, 00:13:34.313 "listen_address": { 00:13:34.313 "adrfam": "IPv4", 00:13:34.313 "traddr": "10.0.0.2", 00:13:34.313 "trsvcid": "4420", 00:13:34.313 "trtype": "TCP" 00:13:34.313 }, 00:13:34.313 "peer_address": { 00:13:34.313 "adrfam": "IPv4", 00:13:34.313 "traddr": "10.0.0.1", 00:13:34.313 "trsvcid": "33368", 00:13:34.313 "trtype": "TCP" 00:13:34.313 }, 00:13:34.313 "qid": 0, 00:13:34.313 "state": "enabled", 00:13:34.313 "thread": "nvmf_tgt_poll_group_000" 00:13:34.313 } 00:13:34.313 ]' 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.313 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.571 16:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:13:35.134 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.134 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:35.134 16:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.134 16:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.135 16:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.135 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.135 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:35.135 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.392 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.649 00:13:35.649 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:35.649 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.649 16:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.907 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.907 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.907 16:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.907 16:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.907 16:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.907 16:28:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:13:35.907 { 00:13:35.907 "auth": { 00:13:35.907 "dhgroup": "ffdhe2048", 00:13:35.907 "digest": "sha512", 00:13:35.907 "state": "completed" 00:13:35.907 }, 00:13:35.907 "cntlid": 111, 00:13:35.907 "listen_address": { 00:13:35.907 "adrfam": "IPv4", 00:13:35.907 "traddr": "10.0.0.2", 00:13:35.907 "trsvcid": "4420", 00:13:35.907 "trtype": "TCP" 00:13:35.907 }, 00:13:35.907 "peer_address": { 00:13:35.907 "adrfam": "IPv4", 00:13:35.907 "traddr": "10.0.0.1", 00:13:35.907 "trsvcid": "33394", 00:13:35.907 "trtype": "TCP" 00:13:35.907 }, 00:13:35.907 "qid": 0, 00:13:35.907 "state": "enabled", 00:13:35.907 "thread": "nvmf_tgt_poll_group_000" 00:13:35.907 } 00:13:35.907 ]' 00:13:35.907 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.165 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.165 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.165 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:36.165 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.165 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.165 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.165 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.423 16:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:13:36.988 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.988 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:36.988 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.988 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.988 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.988 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.988 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.988 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:36.988 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.246 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.504 00:13:37.504 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.504 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.504 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.762 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.762 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.762 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.762 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.762 16:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.762 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:37.762 { 00:13:37.762 "auth": { 00:13:37.762 "dhgroup": "ffdhe3072", 00:13:37.762 "digest": "sha512", 00:13:37.762 "state": "completed" 00:13:37.762 }, 00:13:37.762 "cntlid": 113, 00:13:37.762 "listen_address": { 00:13:37.762 "adrfam": "IPv4", 00:13:37.762 "traddr": "10.0.0.2", 00:13:37.762 "trsvcid": "4420", 00:13:37.762 "trtype": "TCP" 00:13:37.762 }, 00:13:37.762 "peer_address": { 00:13:37.762 "adrfam": "IPv4", 00:13:37.762 "traddr": "10.0.0.1", 00:13:37.762 "trsvcid": "33418", 00:13:37.762 "trtype": "TCP" 00:13:37.762 }, 00:13:37.762 "qid": 0, 00:13:37.762 "state": "enabled", 00:13:37.762 "thread": "nvmf_tgt_poll_group_000" 00:13:37.762 } 00:13:37.762 ]' 00:13:37.762 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.762 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.762 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.020 16:28:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:38.020 16:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.021 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.021 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.021 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.279 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:13:38.846 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.846 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:38.846 16:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.846 16:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.846 16:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.846 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.846 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:38.846 16:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:39.104 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:13:39.104 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:39.104 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:39.104 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:39.104 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:39.104 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.105 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.105 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.105 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.105 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.105 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.105 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.672 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.672 { 00:13:39.672 "auth": { 00:13:39.672 "dhgroup": "ffdhe3072", 00:13:39.672 "digest": "sha512", 00:13:39.672 "state": "completed" 00:13:39.672 }, 00:13:39.672 "cntlid": 115, 00:13:39.672 "listen_address": { 00:13:39.672 "adrfam": "IPv4", 00:13:39.672 "traddr": "10.0.0.2", 00:13:39.672 "trsvcid": "4420", 00:13:39.672 "trtype": "TCP" 00:13:39.672 }, 00:13:39.672 "peer_address": { 00:13:39.672 "adrfam": "IPv4", 00:13:39.672 "traddr": "10.0.0.1", 00:13:39.672 "trsvcid": "41940", 00:13:39.672 "trtype": "TCP" 00:13:39.672 }, 00:13:39.672 "qid": 0, 00:13:39.672 "state": "enabled", 00:13:39.672 "thread": "nvmf_tgt_poll_group_000" 00:13:39.672 } 00:13:39.672 ]' 00:13:39.672 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.931 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.931 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.931 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.931 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.931 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.931 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.931 16:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.199 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:13:40.776 16:28:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.776 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:40.776 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.776 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.776 16:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.776 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.776 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:40.776 16:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.035 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.295 00:13:41.295 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.295 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.295 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.555 { 00:13:41.555 "auth": { 00:13:41.555 "dhgroup": "ffdhe3072", 00:13:41.555 "digest": "sha512", 00:13:41.555 "state": "completed" 00:13:41.555 }, 00:13:41.555 "cntlid": 117, 00:13:41.555 "listen_address": { 00:13:41.555 "adrfam": "IPv4", 00:13:41.555 "traddr": "10.0.0.2", 00:13:41.555 "trsvcid": "4420", 00:13:41.555 "trtype": "TCP" 00:13:41.555 }, 00:13:41.555 "peer_address": { 00:13:41.555 "adrfam": "IPv4", 00:13:41.555 "traddr": "10.0.0.1", 00:13:41.555 "trsvcid": "41960", 00:13:41.555 "trtype": "TCP" 00:13:41.555 }, 00:13:41.555 "qid": 0, 00:13:41.555 "state": "enabled", 00:13:41.555 "thread": "nvmf_tgt_poll_group_000" 00:13:41.555 } 00:13:41.555 ]' 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:41.555 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.814 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.814 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.814 16:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.073 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:13:42.641 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.641 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:42.641 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.641 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.641 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.641 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.641 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.641 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:42.901 16:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:43.160 00:13:43.160 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.160 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.160 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.419 { 00:13:43.419 "auth": { 00:13:43.419 "dhgroup": "ffdhe3072", 00:13:43.419 "digest": "sha512", 00:13:43.419 "state": "completed" 00:13:43.419 }, 00:13:43.419 "cntlid": 119, 00:13:43.419 "listen_address": { 00:13:43.419 "adrfam": "IPv4", 00:13:43.419 "traddr": "10.0.0.2", 00:13:43.419 "trsvcid": "4420", 00:13:43.419 "trtype": "TCP" 00:13:43.419 }, 00:13:43.419 "peer_address": { 00:13:43.419 "adrfam": "IPv4", 00:13:43.419 "traddr": "10.0.0.1", 00:13:43.419 "trsvcid": "41990", 00:13:43.419 "trtype": "TCP" 00:13:43.419 }, 00:13:43.419 "qid": 0, 00:13:43.419 "state": "enabled", 00:13:43.419 "thread": "nvmf_tgt_poll_group_000" 00:13:43.419 } 00:13:43.419 ]' 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.419 
16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:43.419 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.679 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.679 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.679 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.940 16:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:13:44.506 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.506 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:44.506 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.506 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.506 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.506 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.506 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.506 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.506 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.764 16:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.022 00:13:45.022 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.022 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.022 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.279 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.279 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.279 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.279 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.537 { 00:13:45.537 "auth": { 00:13:45.537 "dhgroup": "ffdhe4096", 00:13:45.537 "digest": "sha512", 00:13:45.537 "state": "completed" 00:13:45.537 }, 00:13:45.537 "cntlid": 121, 00:13:45.537 "listen_address": { 00:13:45.537 "adrfam": "IPv4", 00:13:45.537 "traddr": "10.0.0.2", 00:13:45.537 "trsvcid": "4420", 00:13:45.537 "trtype": "TCP" 00:13:45.537 }, 00:13:45.537 "peer_address": { 00:13:45.537 "adrfam": "IPv4", 00:13:45.537 "traddr": "10.0.0.1", 00:13:45.537 "trsvcid": "42014", 00:13:45.537 "trtype": "TCP" 00:13:45.537 }, 00:13:45.537 "qid": 0, 00:13:45.537 "state": "enabled", 00:13:45.537 "thread": "nvmf_tgt_poll_group_000" 00:13:45.537 } 00:13:45.537 ]' 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.537 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.796 16:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret 
DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:13:46.361 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.361 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:46.361 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.361 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.361 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.361 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.361 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:46.361 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:46.618 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.619 16:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.183 00:13:47.183 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.183 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.183 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
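The assertions that follow here, and that recur after every attach in this trace, boil down to a handful of jq checks against the two RPC sockets. A condensed sketch, with the socket path and NQN copied from the trace and the target assumed to be listening on its default RPC socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    # Host side: the freshly attached controller must be listed under the expected name.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # Target side: the admin qpair must have completed DH-HMAC-CHAP with the negotiated parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
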
00:13:47.440 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.440 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.440 16:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.440 16:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.440 16:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.440 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.440 { 00:13:47.440 "auth": { 00:13:47.441 "dhgroup": "ffdhe4096", 00:13:47.441 "digest": "sha512", 00:13:47.441 "state": "completed" 00:13:47.441 }, 00:13:47.441 "cntlid": 123, 00:13:47.441 "listen_address": { 00:13:47.441 "adrfam": "IPv4", 00:13:47.441 "traddr": "10.0.0.2", 00:13:47.441 "trsvcid": "4420", 00:13:47.441 "trtype": "TCP" 00:13:47.441 }, 00:13:47.441 "peer_address": { 00:13:47.441 "adrfam": "IPv4", 00:13:47.441 "traddr": "10.0.0.1", 00:13:47.441 "trsvcid": "42050", 00:13:47.441 "trtype": "TCP" 00:13:47.441 }, 00:13:47.441 "qid": 0, 00:13:47.441 "state": "enabled", 00:13:47.441 "thread": "nvmf_tgt_poll_group_000" 00:13:47.441 } 00:13:47.441 ]' 00:13:47.441 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.441 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.441 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.441 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:47.441 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.441 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.441 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.441 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.698 16:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.633 16:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.199 00:13:49.199 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.199 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.199 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.458 { 00:13:49.458 "auth": { 00:13:49.458 "dhgroup": "ffdhe4096", 00:13:49.458 "digest": "sha512", 00:13:49.458 "state": "completed" 00:13:49.458 }, 00:13:49.458 "cntlid": 125, 00:13:49.458 "listen_address": { 00:13:49.458 "adrfam": "IPv4", 00:13:49.458 "traddr": "10.0.0.2", 00:13:49.458 "trsvcid": "4420", 00:13:49.458 "trtype": "TCP" 00:13:49.458 }, 00:13:49.458 "peer_address": { 00:13:49.458 "adrfam": "IPv4", 00:13:49.458 "traddr": "10.0.0.1", 00:13:49.458 "trsvcid": "34116", 00:13:49.458 
"trtype": "TCP" 00:13:49.458 }, 00:13:49.458 "qid": 0, 00:13:49.458 "state": "enabled", 00:13:49.458 "thread": "nvmf_tgt_poll_group_000" 00:13:49.458 } 00:13:49.458 ]' 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.458 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.716 16:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:13:50.648 16:29:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:50.648 16:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:50.906 00:13:50.906 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.906 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.906 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.471 { 00:13:51.471 "auth": { 00:13:51.471 "dhgroup": "ffdhe4096", 00:13:51.471 "digest": "sha512", 00:13:51.471 "state": "completed" 00:13:51.471 }, 00:13:51.471 "cntlid": 127, 00:13:51.471 "listen_address": { 00:13:51.471 "adrfam": "IPv4", 00:13:51.471 "traddr": "10.0.0.2", 00:13:51.471 "trsvcid": "4420", 00:13:51.471 "trtype": "TCP" 00:13:51.471 }, 00:13:51.471 "peer_address": { 00:13:51.471 "adrfam": "IPv4", 00:13:51.471 "traddr": "10.0.0.1", 00:13:51.471 "trsvcid": "34136", 00:13:51.471 "trtype": "TCP" 00:13:51.471 }, 00:13:51.471 "qid": 0, 00:13:51.471 "state": "enabled", 00:13:51.471 "thread": "nvmf_tgt_poll_group_000" 00:13:51.471 } 00:13:51.471 ]' 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.471 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.729 16:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.662 16:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.229 00:13:53.229 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.229 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.229 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.488 { 00:13:53.488 "auth": { 00:13:53.488 "dhgroup": "ffdhe6144", 00:13:53.488 "digest": "sha512", 00:13:53.488 "state": "completed" 00:13:53.488 }, 00:13:53.488 "cntlid": 129, 00:13:53.488 "listen_address": { 00:13:53.488 "adrfam": "IPv4", 00:13:53.488 "traddr": "10.0.0.2", 00:13:53.488 "trsvcid": "4420", 00:13:53.488 "trtype": "TCP" 00:13:53.488 }, 00:13:53.488 "peer_address": { 00:13:53.488 "adrfam": "IPv4", 00:13:53.488 "traddr": "10.0.0.1", 00:13:53.488 "trsvcid": "34160", 00:13:53.488 "trtype": "TCP" 00:13:53.488 }, 00:13:53.488 "qid": 0, 00:13:53.488 "state": "enabled", 00:13:53.488 "thread": "nvmf_tgt_poll_group_000" 00:13:53.488 } 00:13:53.488 ]' 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.488 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.746 16:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:13:54.310 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.310 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:54.310 16:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.310 16:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.310 16:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:54.310 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.310 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:54.310 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.567 16:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.132 00:13:55.132 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.132 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.132 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.388 { 00:13:55.388 "auth": { 00:13:55.388 "dhgroup": "ffdhe6144", 00:13:55.388 "digest": "sha512", 00:13:55.388 "state": "completed" 00:13:55.388 }, 00:13:55.388 "cntlid": 131, 00:13:55.388 "listen_address": { 00:13:55.388 "adrfam": "IPv4", 00:13:55.388 "traddr": "10.0.0.2", 
00:13:55.388 "trsvcid": "4420", 00:13:55.388 "trtype": "TCP" 00:13:55.388 }, 00:13:55.388 "peer_address": { 00:13:55.388 "adrfam": "IPv4", 00:13:55.388 "traddr": "10.0.0.1", 00:13:55.388 "trsvcid": "34196", 00:13:55.388 "trtype": "TCP" 00:13:55.388 }, 00:13:55.388 "qid": 0, 00:13:55.388 "state": "enabled", 00:13:55.388 "thread": "nvmf_tgt_poll_group_000" 00:13:55.388 } 00:13:55.388 ]' 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.388 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.646 16:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.576 16:29:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.576 16:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.141 00:13:57.141 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.141 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:57.141 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.399 { 00:13:57.399 "auth": { 00:13:57.399 "dhgroup": "ffdhe6144", 00:13:57.399 "digest": "sha512", 00:13:57.399 "state": "completed" 00:13:57.399 }, 00:13:57.399 "cntlid": 133, 00:13:57.399 "listen_address": { 00:13:57.399 "adrfam": "IPv4", 00:13:57.399 "traddr": "10.0.0.2", 00:13:57.399 "trsvcid": "4420", 00:13:57.399 "trtype": "TCP" 00:13:57.399 }, 00:13:57.399 "peer_address": { 00:13:57.399 "adrfam": "IPv4", 00:13:57.399 "traddr": "10.0.0.1", 00:13:57.399 "trsvcid": "34230", 00:13:57.399 "trtype": "TCP" 00:13:57.399 }, 00:13:57.399 "qid": 0, 00:13:57.399 "state": "enabled", 00:13:57.399 "thread": "nvmf_tgt_poll_group_000" 00:13:57.399 } 00:13:57.399 ]' 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:57.399 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.658 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.658 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:57.658 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.931 16:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:13:58.497 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.497 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:13:58.497 16:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.497 16:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.497 16:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.497 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.497 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.497 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:58.756 16:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.013 
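For reference, the hostrpc/rpc_cmd pairs traced above reduce to the following plain-shell sketch of one authentication pass (sha512 digest, ffdhe6144 group, key index 3). This is illustrative only: it assumes key3 was registered earlier in the run, and it calls rpc.py directly where the script's rpc_cmd and hostrpc helpers normally pick the target and host RPC sockets.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host side: restrict DH-CHAP negotiation to a single digest and DH group.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Target side: allow the host NQN on the subsystem with the matching key
    # (key3 carries no controller key in this run, so no --dhchap-ctrlr-key).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Host side: attaching the controller performs the DH-CHAP handshake.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key3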
00:13:59.013 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.013 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.013 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.272 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.272 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.272 16:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.272 16:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.272 16:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.530 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.530 { 00:13:59.530 "auth": { 00:13:59.530 "dhgroup": "ffdhe6144", 00:13:59.530 "digest": "sha512", 00:13:59.530 "state": "completed" 00:13:59.530 }, 00:13:59.530 "cntlid": 135, 00:13:59.530 "listen_address": { 00:13:59.530 "adrfam": "IPv4", 00:13:59.530 "traddr": "10.0.0.2", 00:13:59.530 "trsvcid": "4420", 00:13:59.530 "trtype": "TCP" 00:13:59.530 }, 00:13:59.530 "peer_address": { 00:13:59.530 "adrfam": "IPv4", 00:13:59.530 "traddr": "10.0.0.1", 00:13:59.530 "trsvcid": "60692", 00:13:59.530 "trtype": "TCP" 00:13:59.530 }, 00:13:59.530 "qid": 0, 00:13:59.530 "state": "enabled", 00:13:59.530 "thread": "nvmf_tgt_poll_group_000" 00:13:59.530 } 00:13:59.530 ]' 00:13:59.530 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.530 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.530 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.531 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:59.531 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.531 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.531 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.531 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.789 16:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:14:00.355 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.355 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:00.355 16:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.355 16:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.355 16:29:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.355 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:00.355 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.355 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:00.355 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:00.614 16:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.188 00:14:01.188 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.188 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.188 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.452 { 00:14:01.452 "auth": { 00:14:01.452 "dhgroup": "ffdhe8192", 00:14:01.452 "digest": "sha512", 
00:14:01.452 "state": "completed" 00:14:01.452 }, 00:14:01.452 "cntlid": 137, 00:14:01.452 "listen_address": { 00:14:01.452 "adrfam": "IPv4", 00:14:01.452 "traddr": "10.0.0.2", 00:14:01.452 "trsvcid": "4420", 00:14:01.452 "trtype": "TCP" 00:14:01.452 }, 00:14:01.452 "peer_address": { 00:14:01.452 "adrfam": "IPv4", 00:14:01.452 "traddr": "10.0.0.1", 00:14:01.452 "trsvcid": "60728", 00:14:01.452 "trtype": "TCP" 00:14:01.452 }, 00:14:01.452 "qid": 0, 00:14:01.452 "state": "enabled", 00:14:01.452 "thread": "nvmf_tgt_poll_group_000" 00:14:01.452 } 00:14:01.452 ]' 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.452 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.710 16:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:14:02.274 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.274 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:02.274 16:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.274 16:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.274 16:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.274 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.274 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:02.274 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:02.532 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:02.533 16:29:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:02.533 16:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.100 00:14:03.100 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.100 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.100 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.666 { 00:14:03.666 "auth": { 00:14:03.666 "dhgroup": "ffdhe8192", 00:14:03.666 "digest": "sha512", 00:14:03.666 "state": "completed" 00:14:03.666 }, 00:14:03.666 "cntlid": 139, 00:14:03.666 "listen_address": { 00:14:03.666 "adrfam": "IPv4", 00:14:03.666 "traddr": "10.0.0.2", 00:14:03.666 "trsvcid": "4420", 00:14:03.666 "trtype": "TCP" 00:14:03.666 }, 00:14:03.666 "peer_address": { 00:14:03.666 "adrfam": "IPv4", 00:14:03.666 "traddr": "10.0.0.1", 00:14:03.666 "trsvcid": "60762", 00:14:03.666 "trtype": "TCP" 00:14:03.666 }, 00:14:03.666 "qid": 0, 00:14:03.666 "state": "enabled", 00:14:03.666 "thread": "nvmf_tgt_poll_group_000" 00:14:03.666 } 00:14:03.666 ]' 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
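The digest/dhgroup/state assertions around this point amount to the sketch below: the host lists its controllers, the target reports the qpair's auth block, and jq extracts the three fields that the [[ ]] checks compare. Socket selection is simplified as in the earlier sketch; the expected values shown are the ones for this loop iteration (sha512 / ffdhe8192 / completed).

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host side: the attached controller should be reported as nvme0.
    name=$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Target side: once DH-CHAP completes, the qpair listing carries an "auth" object.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]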
00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.666 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.924 16:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:01:MDljODhiMGM5MDdkZDA5NTUxYTc5ZWNlYjU3MmJlZmaPL6kQ: --dhchap-ctrl-secret DHHC-1:02:YTRjYzkwNGMxZTU2NjNmMzM2NTgwMDk5OTUyZmU0NDUzNWE5ZDgyZDYyY2E3OTIyAd/Y0g==: 00:14:04.490 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.490 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:04.490 16:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.490 16:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.490 16:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.490 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.490 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:04.490 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:04.747 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:04.748 16:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:05.312 00:14:05.312 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.312 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.312 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.570 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.570 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.570 16:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.570 16:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.570 16:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.570 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.570 { 00:14:05.571 "auth": { 00:14:05.571 "dhgroup": "ffdhe8192", 00:14:05.571 "digest": "sha512", 00:14:05.571 "state": "completed" 00:14:05.571 }, 00:14:05.571 "cntlid": 141, 00:14:05.571 "listen_address": { 00:14:05.571 "adrfam": "IPv4", 00:14:05.571 "traddr": "10.0.0.2", 00:14:05.571 "trsvcid": "4420", 00:14:05.571 "trtype": "TCP" 00:14:05.571 }, 00:14:05.571 "peer_address": { 00:14:05.571 "adrfam": "IPv4", 00:14:05.571 "traddr": "10.0.0.1", 00:14:05.571 "trsvcid": "60792", 00:14:05.571 "trtype": "TCP" 00:14:05.571 }, 00:14:05.571 "qid": 0, 00:14:05.571 "state": "enabled", 00:14:05.571 "thread": "nvmf_tgt_poll_group_000" 00:14:05.571 } 00:14:05.571 ]' 00:14:05.571 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.571 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.571 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.829 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:05.829 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.829 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.829 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.829 16:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.087 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:02:YjVhNjVjYzAzOWM0ZWU3N2FjN2ViODMwZDlhZGVjOGI1YTg4MTE1NjIxZTBhZTQzoDY2NA==: --dhchap-ctrl-secret DHHC-1:01:Y2U5YzFjZDgwMWFkNTc5Zjk4NzgzYzg5ZmIzY2JmMzPEv3z1: 00:14:06.652 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.652 16:29:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:06.652 16:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.652 16:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.652 16:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.652 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.652 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:06.652 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:06.912 16:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:07.479 00:14:07.479 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.479 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.479 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
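Each iteration also exercises the kernel host path, as seen again just below: nvme-cli connects with the DH-CHAP secret, is disconnected, and the host entry is removed from the subsystem. Sketch only; the DHHC-1 string is the key generated earlier in the run (shown here as a placeholder), and key3 has no controller secret, so --dhchap-ctrl-secret is omitted.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Kernel initiator: authenticate with the per-key DHHC-1 secret (placeholder here).
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
        -q "$HOSTNQN" --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f \
        --dhchap-secret "DHHC-1:03:<key3 secret>"

    # Tear down: disconnect prints "disconnected 1 controller(s)", then the
    # host entry is removed from the subsystem on the target side.
    nvme disconnect -n "$SUBNQN"
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"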
00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.738 { 00:14:07.738 "auth": { 00:14:07.738 "dhgroup": "ffdhe8192", 00:14:07.738 "digest": "sha512", 00:14:07.738 "state": "completed" 00:14:07.738 }, 00:14:07.738 "cntlid": 143, 00:14:07.738 "listen_address": { 00:14:07.738 "adrfam": "IPv4", 00:14:07.738 "traddr": "10.0.0.2", 00:14:07.738 "trsvcid": "4420", 00:14:07.738 "trtype": "TCP" 00:14:07.738 }, 00:14:07.738 "peer_address": { 00:14:07.738 "adrfam": "IPv4", 00:14:07.738 "traddr": "10.0.0.1", 00:14:07.738 "trsvcid": "60824", 00:14:07.738 "trtype": "TCP" 00:14:07.738 }, 00:14:07.738 "qid": 0, 00:14:07.738 "state": "enabled", 00:14:07.738 "thread": "nvmf_tgt_poll_group_000" 00:14:07.738 } 00:14:07.738 ]' 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.738 16:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.997 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:08.564 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.823 16:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:09.390 00:14:09.390 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.390 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.390 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.648 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.648 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.648 16:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.648 16:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.648 16:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.648 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.648 { 00:14:09.648 "auth": { 00:14:09.648 "dhgroup": "ffdhe8192", 00:14:09.648 "digest": "sha512", 00:14:09.648 "state": "completed" 00:14:09.648 }, 00:14:09.648 "cntlid": 145, 00:14:09.648 "listen_address": { 00:14:09.648 "adrfam": "IPv4", 00:14:09.648 "traddr": "10.0.0.2", 00:14:09.648 "trsvcid": "4420", 00:14:09.648 "trtype": "TCP" 00:14:09.648 }, 00:14:09.648 "peer_address": { 00:14:09.648 "adrfam": "IPv4", 00:14:09.648 "traddr": "10.0.0.1", 00:14:09.648 "trsvcid": "51264", 00:14:09.648 "trtype": "TCP" 00:14:09.648 }, 00:14:09.648 "qid": 0, 00:14:09.648 "state": "enabled", 00:14:09.648 "thread": "nvmf_tgt_poll_group_000" 00:14:09.648 } 
00:14:09.648 ]' 00:14:09.648 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.648 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:09.648 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.906 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:09.906 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.906 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.906 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.906 16:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.164 16:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:00:YmMxOTE2ZGJmY2E3MzljYjIxYmI0MTZlMGYzZTBiYzZiMjMzN2M2ZmE3YzJlZTgz9JREJw==: --dhchap-ctrl-secret DHHC-1:03:OTQ1MGQ4ODRkOTkyMjFmN2QzMjJlMGM4OWU0YmEyMTA3MDMwZmEzNTg1ZTJhNjJhYmY1OWIyYTE0OTJjMjEwYxrxhaM=: 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:10.731 16:29:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:10.731 16:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:11.297 2024/07/21 16:29:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:11.297 request: 00:14:11.297 { 00:14:11.297 "method": "bdev_nvme_attach_controller", 00:14:11.297 "params": { 00:14:11.297 "name": "nvme0", 00:14:11.297 "trtype": "tcp", 00:14:11.297 "traddr": "10.0.0.2", 00:14:11.297 "adrfam": "ipv4", 00:14:11.297 "trsvcid": "4420", 00:14:11.297 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:11.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f", 00:14:11.297 "prchk_reftag": false, 00:14:11.297 "prchk_guard": false, 00:14:11.297 "hdgst": false, 00:14:11.297 "ddgst": false, 00:14:11.297 "dhchap_key": "key2" 00:14:11.297 } 00:14:11.297 } 00:14:11.297 Got JSON-RPC error response 00:14:11.297 GoRPCClient: error on JSON-RPC call 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:11.297 16:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:11.862 2024/07/21 16:29:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:11.862 request: 00:14:11.862 { 00:14:11.862 "method": "bdev_nvme_attach_controller", 00:14:11.862 "params": { 00:14:11.862 "name": "nvme0", 00:14:11.863 "trtype": "tcp", 00:14:11.863 "traddr": "10.0.0.2", 00:14:11.863 "adrfam": "ipv4", 00:14:11.863 "trsvcid": "4420", 00:14:11.863 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:11.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f", 00:14:11.863 "prchk_reftag": false, 00:14:11.863 "prchk_guard": false, 00:14:11.863 "hdgst": false, 00:14:11.863 "ddgst": false, 00:14:11.863 "dhchap_key": "key1", 00:14:11.863 "dhchap_ctrlr_key": "ckey2" 00:14:11.863 } 00:14:11.863 } 00:14:11.863 Got JSON-RPC error response 00:14:11.863 GoRPCClient: error on JSON-RPC call 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key1 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.863 16:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.133 2024/07/21 16:29:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:12.133 request: 00:14:12.133 { 00:14:12.133 "method": "bdev_nvme_attach_controller", 00:14:12.133 "params": { 00:14:12.133 "name": "nvme0", 00:14:12.133 "trtype": "tcp", 00:14:12.133 "traddr": "10.0.0.2", 00:14:12.133 "adrfam": "ipv4", 00:14:12.133 "trsvcid": "4420", 00:14:12.133 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:14:12.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f", 00:14:12.133 "prchk_reftag": false, 00:14:12.133 "prchk_guard": false, 00:14:12.133 "hdgst": false, 00:14:12.133 "ddgst": false, 00:14:12.133 "dhchap_key": "key1", 00:14:12.133 "dhchap_ctrlr_key": "ckey1" 00:14:12.133 } 00:14:12.133 } 00:14:12.133 Got JSON-RPC error response 00:14:12.133 GoRPCClient: error on JSON-RPC call 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 78154 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 78154 ']' 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78154 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78154 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:12.403 killing process with pid 78154 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78154' 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78154 00:14:12.403 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78154 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82919 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82919 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82919 ']' 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.661 16:29:30 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.661 16:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82919 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82919 ']' 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
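At this point target/auth.sh has restarted the NVMe-oF target inside the nvmf_tgt_ns_spdk namespace with DHCHAP debug logging (-L nvmf_auth) and is waiting for its RPC socket. Condensed from the command traces above, that restart looks roughly like the sketch below; the polling loop is only a simplified stand-in for the waitforlisten helper, and rpc_get_methods is used here merely as a cheap RPC to probe the socket with.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target answers (stand-in for waitforlisten).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done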
00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.595 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.853 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.853 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:13.853 16:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:14:13.853 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.853 16:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.112 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.679 00:14:14.679 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.679 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.679 16:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.937 { 00:14:14.937 "auth": { 00:14:14.937 "dhgroup": 
"ffdhe8192", 00:14:14.937 "digest": "sha512", 00:14:14.937 "state": "completed" 00:14:14.937 }, 00:14:14.937 "cntlid": 1, 00:14:14.937 "listen_address": { 00:14:14.937 "adrfam": "IPv4", 00:14:14.937 "traddr": "10.0.0.2", 00:14:14.937 "trsvcid": "4420", 00:14:14.937 "trtype": "TCP" 00:14:14.937 }, 00:14:14.937 "peer_address": { 00:14:14.937 "adrfam": "IPv4", 00:14:14.937 "traddr": "10.0.0.1", 00:14:14.937 "trsvcid": "51310", 00:14:14.937 "trtype": "TCP" 00:14:14.937 }, 00:14:14.937 "qid": 0, 00:14:14.937 "state": "enabled", 00:14:14.937 "thread": "nvmf_tgt_poll_group_000" 00:14:14.937 } 00:14:14.937 ]' 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:14.937 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:15.194 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.194 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.194 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.452 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid 93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-secret DHHC-1:03:NGY0ZDQ5ZDgxNmE5MmFjMDU3ZDVlMTMyY2I1Y2E0NTk5NThmMmRmZDI3YWY2ODk1Y2JmNDc3YTI5YmFlYTU0M3NAE8c=: 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --dhchap-key key3 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:16.019 16:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:16.019 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.019 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:16.019 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.019 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:16.019 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.019 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:16.019 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.019 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.019 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.278 2024/07/21 16:29:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:16.278 request: 00:14:16.278 { 00:14:16.278 "method": "bdev_nvme_attach_controller", 00:14:16.278 "params": { 00:14:16.278 "name": "nvme0", 00:14:16.278 "trtype": "tcp", 00:14:16.278 "traddr": "10.0.0.2", 00:14:16.278 "adrfam": "ipv4", 00:14:16.278 "trsvcid": "4420", 00:14:16.278 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:16.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f", 00:14:16.278 "prchk_reftag": false, 00:14:16.278 "prchk_guard": false, 00:14:16.278 "hdgst": false, 00:14:16.278 "ddgst": false, 00:14:16.278 "dhchap_key": "key3" 00:14:16.278 } 00:14:16.278 } 00:14:16.278 Got JSON-RPC error response 00:14:16.278 GoRPCClient: error on JSON-RPC call 00:14:16.278 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:16.278 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:16.278 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:16.278 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:16.278 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:16.278 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:16.278 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
00:14:16.278 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:16.537 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.537 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:16.537 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.537 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:16.537 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.537 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:16.537 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.537 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.537 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:16.796 2024/07/21 16:29:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:16.796 request: 00:14:16.796 { 00:14:16.796 "method": "bdev_nvme_attach_controller", 00:14:16.796 "params": { 00:14:16.796 "name": "nvme0", 00:14:16.796 "trtype": "tcp", 00:14:16.796 "traddr": "10.0.0.2", 00:14:16.796 "adrfam": "ipv4", 00:14:16.796 "trsvcid": "4420", 00:14:16.796 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:16.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f", 00:14:16.796 "prchk_reftag": false, 00:14:16.796 "prchk_guard": false, 00:14:16.796 "hdgst": false, 00:14:16.796 "ddgst": false, 00:14:16.796 "dhchap_key": "key3" 00:14:16.796 } 00:14:16.796 } 00:14:16.796 Got JSON-RPC error response 00:14:16.796 GoRPCClient: error on JSON-RPC call 00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
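The two failed attach attempts above follow the same negative-test pattern: the host's allowed DHCHAP parameters are narrowed with bdev_nvme_set_options, and the following bdev_nvme_attach_controller is then expected to fail against a target that negotiated sha512/ffdhe8192. Stripped of the NOT/valid_exec_arg wrappers from autotest_common.sh, the host-side check amounts to something like this sketch (socket path, addresses, NQNs and key name are copied from the trace; the error handling is simplified):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
# Allow only ffdhe2048 on the host side, mismatching the dhgroup the target expects.
$rpc -s "$hostsock" bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
if $rpc -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
    echo "attach unexpectedly succeeded despite the dhgroup mismatch" >&2
    exit 1
fi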
00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:16.796 16:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:17.055 16:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:17.314 2024/07/21 16:29:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:17.314 request: 00:14:17.314 { 00:14:17.314 "method": "bdev_nvme_attach_controller", 00:14:17.314 "params": { 00:14:17.314 "name": "nvme0", 00:14:17.314 "trtype": "tcp", 00:14:17.314 "traddr": "10.0.0.2", 00:14:17.314 "adrfam": "ipv4", 00:14:17.314 "trsvcid": "4420", 00:14:17.314 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:17.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f", 00:14:17.314 "prchk_reftag": false, 00:14:17.314 "prchk_guard": false, 00:14:17.314 "hdgst": false, 00:14:17.314 "ddgst": false, 00:14:17.314 "dhchap_key": "key0", 00:14:17.314 "dhchap_ctrlr_key": "key1" 00:14:17.314 } 00:14:17.314 } 00:14:17.314 Got JSON-RPC error response 00:14:17.314 GoRPCClient: error on JSON-RPC call 00:14:17.314 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:17.314 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:17.314 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:17.314 16:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:17.314 16:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:17.314 16:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:17.573 00:14:17.573 16:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:17.573 16:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:17.573 16:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.832 16:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.832 16:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.832 16:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 78198 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 78198 ']' 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 78198 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78198 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:18.399 killing process with pid 78198 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78198' 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 78198 00:14:18.399 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 78198 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.658 rmmod nvme_tcp 00:14:18.658 rmmod nvme_fabrics 00:14:18.658 rmmod nvme_keyring 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82919 ']' 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82919 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82919 ']' 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82919 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82919 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:18.658 killing process with pid 82919 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82919' 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82919 00:14:18.658 16:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82919 00:14:18.916 16:29:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:18.916 16:29:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:18.916 16:29:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:18.916 16:29:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.916 16:29:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:18.916 16:29:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.916 16:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.916 16:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.174 16:29:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:19.174 16:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ILp /tmp/spdk.key-sha256.HWP /tmp/spdk.key-sha384.uoP /tmp/spdk.key-sha512.TNZ /tmp/spdk.key-sha512.qaB /tmp/spdk.key-sha384.ynC /tmp/spdk.key-sha256.fDA '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:19.174 00:14:19.174 real 2m38.588s 00:14:19.174 user 6m24.628s 00:14:19.174 sys 0m21.211s 00:14:19.174 16:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.174 ************************************ 00:14:19.174 END TEST nvmf_auth_target 00:14:19.174 ************************************ 00:14:19.174 16:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.174 16:29:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:19.174 16:29:37 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:14:19.174 16:29:37 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:19.174 16:29:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:19.174 16:29:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.174 16:29:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.174 ************************************ 00:14:19.174 START TEST nvmf_bdevio_no_huge 00:14:19.174 ************************************ 00:14:19.174 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:19.174 * Looking for test storage... 
00:14:19.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:19.174 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:19.174 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:19.174 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.174 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.175 16:29:37 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:19.175 Cannot find device "nvmf_tgt_br" 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.175 Cannot find device "nvmf_tgt_br2" 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:19.175 Cannot find device "nvmf_tgt_br" 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:19.175 Cannot find device "nvmf_tgt_br2" 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:19.175 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:19.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:19.433 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:19.434 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:19.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:14:19.691 00:14:19.691 --- 10.0.0.2 ping statistics --- 00:14:19.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.691 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:19.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:19.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:14:19.691 00:14:19.691 --- 10.0.0.3 ping statistics --- 00:14:19.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.691 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:19.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:19.691 00:14:19.691 --- 10.0.0.1 ping statistics --- 00:14:19.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.691 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83330 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83330 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83330 ']' 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
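For the bdevio run, nvmf_veth_init (traced above) rebuilds the virtual topology that the pings then verify: a network namespace holding the target-side veth end, a bridge joining it to the initiator side, and an iptables rule admitting TCP port 4420. Condensed from the commands in the trace, the setup is roughly the sketch below; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the loopback bring-up inside the namespace are elided.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target address, as in the statistics above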
00:14:19.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.691 16:29:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.691 [2024-07-21 16:29:37.745487] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:14:19.691 [2024-07-21 16:29:37.745573] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:19.691 [2024-07-21 16:29:37.884836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.953 [2024-07-21 16:29:38.029340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.953 [2024-07-21 16:29:38.029402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.953 [2024-07-21 16:29:38.029416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.953 [2024-07-21 16:29:38.029427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.953 [2024-07-21 16:29:38.029437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.953 [2024-07-21 16:29:38.030234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:19.953 [2024-07-21 16:29:38.030361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:19.953 [2024-07-21 16:29:38.030487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:19.953 [2024-07-21 16:29:38.030498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.524 [2024-07-21 16:29:38.712553] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.524 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.524 Malloc0 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
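With the target up under --no-huge (1024 MiB of regular memory, core mask 0x78), the transport and the Malloc0 bdev created above are wired into a subsystem in the entries that follow. Condensed, the target-side provisioning for the bdevio test amounts to the sketch below; rpc_cmd in the trace resolves to rpc.py against the target's default /var/tmp/spdk.sock, and the transport flags are copied verbatim from the trace rather than interpreted here.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # flags as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420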
00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.781 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.782 [2024-07-21 16:29:38.751247] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:20.782 { 00:14:20.782 "params": { 00:14:20.782 "name": "Nvme$subsystem", 00:14:20.782 "trtype": "$TEST_TRANSPORT", 00:14:20.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:20.782 "adrfam": "ipv4", 00:14:20.782 "trsvcid": "$NVMF_PORT", 00:14:20.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:20.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:20.782 "hdgst": ${hdgst:-false}, 00:14:20.782 "ddgst": ${ddgst:-false} 00:14:20.782 }, 00:14:20.782 "method": "bdev_nvme_attach_controller" 00:14:20.782 } 00:14:20.782 EOF 00:14:20.782 )") 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:20.782 16:29:38 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:20.782 "params": { 00:14:20.782 "name": "Nvme1", 00:14:20.782 "trtype": "tcp", 00:14:20.782 "traddr": "10.0.0.2", 00:14:20.782 "adrfam": "ipv4", 00:14:20.782 "trsvcid": "4420", 00:14:20.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.782 "hdgst": false, 00:14:20.782 "ddgst": false 00:14:20.782 }, 00:14:20.782 "method": "bdev_nvme_attach_controller" 00:14:20.782 }' 00:14:20.782 [2024-07-21 16:29:38.812053] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:14:20.782 [2024-07-21 16:29:38.812150] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83384 ] 00:14:20.782 [2024-07-21 16:29:38.958401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:21.039 [2024-07-21 16:29:39.084099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.039 [2024-07-21 16:29:39.084254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.039 [2024-07-21 16:29:39.084276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.296 I/O targets: 00:14:21.296 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:21.296 00:14:21.296 00:14:21.296 CUnit - A unit testing framework for C - Version 2.1-3 00:14:21.296 http://cunit.sourceforge.net/ 00:14:21.296 00:14:21.296 00:14:21.296 Suite: bdevio tests on: Nvme1n1 00:14:21.296 Test: blockdev write read block ...passed 00:14:21.296 Test: blockdev write zeroes read block ...passed 00:14:21.296 Test: blockdev write zeroes read no split ...passed 00:14:21.296 Test: blockdev write zeroes read split ...passed 00:14:21.296 Test: blockdev write zeroes read split partial ...passed 00:14:21.296 Test: blockdev reset ...[2024-07-21 16:29:39.389305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:21.296 [2024-07-21 16:29:39.389410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2367460 (9): Bad file descriptor 00:14:21.296 [2024-07-21 16:29:39.403639] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:21.296 passed 00:14:21.296 Test: blockdev write read 8 blocks ...passed 00:14:21.296 Test: blockdev write read size > 128k ...passed 00:14:21.296 Test: blockdev write read invalid size ...passed 00:14:21.296 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:21.296 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:21.296 Test: blockdev write read max offset ...passed 00:14:21.585 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:21.585 Test: blockdev writev readv 8 blocks ...passed 00:14:21.585 Test: blockdev writev readv 30 x 1block ...passed 00:14:21.585 Test: blockdev writev readv block ...passed 00:14:21.585 Test: blockdev writev readv size > 128k ...passed 00:14:21.585 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:21.585 Test: blockdev comparev and writev ...[2024-07-21 16:29:39.578035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.585 [2024-07-21 16:29:39.578127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.578148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.585 [2024-07-21 16:29:39.578159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.578764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.585 [2024-07-21 16:29:39.578820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.578836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.585 [2024-07-21 16:29:39.578846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.579322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.585 [2024-07-21 16:29:39.579349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.579366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.585 [2024-07-21 16:29:39.579376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.579989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.585 [2024-07-21 16:29:39.580032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.580064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:21.585 [2024-07-21 16:29:39.580074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:21.585 passed 00:14:21.585 Test: blockdev nvme passthru rw ...passed 00:14:21.585 Test: blockdev nvme passthru vendor specific ...[2024-07-21 16:29:39.663715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.585 [2024-07-21 16:29:39.663740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.664018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.585 [2024-07-21 16:29:39.664039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.664229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.585 [2024-07-21 16:29:39.664255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:21.585 [2024-07-21 16:29:39.664457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:21.585 [2024-07-21 16:29:39.664483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:21.585 passed 00:14:21.585 Test: blockdev nvme admin passthru ...passed 00:14:21.585 Test: blockdev copy ...passed 00:14:21.585 00:14:21.585 Run Summary: Type Total Ran Passed Failed Inactive 00:14:21.585 suites 1 1 n/a 0 0 00:14:21.585 tests 23 23 23 0 0 00:14:21.585 asserts 152 152 152 0 n/a 00:14:21.585 00:14:21.585 Elapsed time = 0.939 seconds 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.149 rmmod nvme_tcp 00:14:22.149 rmmod nvme_fabrics 00:14:22.149 rmmod nvme_keyring 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83330 ']' 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 83330 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83330 ']' 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83330 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83330 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:22.149 killing process with pid 83330 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83330' 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83330 00:14:22.149 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83330 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:22.716 ************************************ 00:14:22.716 END TEST nvmf_bdevio_no_huge 00:14:22.716 ************************************ 00:14:22.716 00:14:22.716 real 0m3.548s 00:14:22.716 user 0m12.822s 00:14:22.716 sys 0m1.309s 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:22.716 16:29:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:22.716 16:29:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:22.716 16:29:40 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:22.716 16:29:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:22.716 16:29:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.716 16:29:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:22.716 ************************************ 00:14:22.716 START TEST nvmf_tls 00:14:22.716 ************************************ 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:22.716 * Looking for test storage... 
00:14:22.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.716 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:22.717 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:22.975 Cannot find device "nvmf_tgt_br" 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.975 Cannot find device "nvmf_tgt_br2" 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:22.975 Cannot find device "nvmf_tgt_br" 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:22.975 Cannot find device "nvmf_tgt_br2" 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:22.975 16:29:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:22.975 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:23.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:14:23.234 00:14:23.234 --- 10.0.0.2 ping statistics --- 00:14:23.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.234 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:23.234 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.234 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:23.234 00:14:23.234 --- 10.0.0.3 ping statistics --- 00:14:23.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.234 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:23.234 00:14:23.234 --- 10.0.0.1 ping statistics --- 00:14:23.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.234 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83564 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83564 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83564 ']' 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.234 16:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.234 [2024-07-21 16:29:41.325364] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:14:23.234 [2024-07-21 16:29:41.325453] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.493 [2024-07-21 16:29:41.468562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.493 [2024-07-21 16:29:41.579711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.493 [2024-07-21 16:29:41.579757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:23.493 [2024-07-21 16:29:41.579767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.493 [2024-07-21 16:29:41.579775] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.493 [2024-07-21 16:29:41.579781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.493 [2024-07-21 16:29:41.579808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.425 16:29:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.425 16:29:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:24.425 16:29:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.425 16:29:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:24.425 16:29:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.425 16:29:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.425 16:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:24.425 16:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:24.683 true 00:14:24.683 16:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:24.683 16:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:24.941 16:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:24.941 16:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:24.941 16:29:42 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:24.941 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:24.941 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:25.200 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:25.200 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:25.200 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:25.458 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:25.458 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:25.716 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:25.716 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:25.716 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:25.716 16:29:43 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:25.974 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:25.974 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:25.974 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:26.232 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:26.232 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:14:26.490 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:26.490 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:26.490 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:26.747 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:26.747 16:29:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.eB2xTuI8Jf 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.x4tJTxNo6E 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.eB2xTuI8Jf 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.x4tJTxNo6E 00:14:27.005 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:27.263 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:27.520 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.eB2xTuI8Jf 
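The two interchange keys generated above (written to /tmp/tmp.eB2xTuI8Jf and /tmp/tmp.x4tJTxNo6E and chmod'ed to 0600) wrap the raw hex PSKs 00112233445566778899aabbccddeeff and ffeeddccbbaa99887766554433221100 in the NVMeTLSkey-1:01:...: form. A quick shell sanity check of such a key, assuming the base64 payload is the configured PSK followed by a 4-byte CRC-32 trailer, which is what the format_interchange_psk helper appears to emit here:

key='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
payload=$(printf '%s' "$key" | cut -d: -f3)
# decode and drop the assumed 4-byte trailer (GNU head); prints 00112233445566778899aabbccddeeff
printf '%s' "$payload" | base64 -d | head -c -4; echo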
00:14:27.520 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.eB2xTuI8Jf 00:14:27.520 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:27.779 [2024-07-21 16:29:45.904840] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.779 16:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:28.036 16:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:28.295 [2024-07-21 16:29:46.316871] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:28.295 [2024-07-21 16:29:46.317088] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.295 16:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:28.552 malloc0 00:14:28.552 16:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:28.810 16:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eB2xTuI8Jf 00:14:29.067 [2024-07-21 16:29:47.054993] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:29.067 16:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.eB2xTuI8Jf 00:14:39.118 Initializing NVMe Controllers 00:14:39.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:39.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:39.118 Initialization complete. Launching workers. 
00:14:39.118 ======================================================== 00:14:39.118 Latency(us) 00:14:39.118 Device Information : IOPS MiB/s Average min max 00:14:39.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11855.17 46.31 5399.48 900.28 6362.43 00:14:39.118 ======================================================== 00:14:39.118 Total : 11855.17 46.31 5399.48 900.28 6362.43 00:14:39.118 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eB2xTuI8Jf 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eB2xTuI8Jf' 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83923 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83923 /var/tmp/bdevperf.sock 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83923 ']' 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.118 16:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.118 [2024-07-21 16:29:57.318972] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:14:39.118 [2024-07-21 16:29:57.319085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83923 ] 00:14:39.375 [2024-07-21 16:29:57.461038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.375 [2024-07-21 16:29:57.570803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.306 16:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.307 16:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:40.307 16:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eB2xTuI8Jf 00:14:40.307 [2024-07-21 16:29:58.490934] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.307 [2024-07-21 16:29:58.491063] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:40.564 TLSTESTn1 00:14:40.564 16:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:40.564 Running I/O for 10 seconds... 00:14:50.572 00:14:50.572 Latency(us) 00:14:50.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.572 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:50.572 Verification LBA range: start 0x0 length 0x2000 00:14:50.572 TLSTESTn1 : 10.01 4567.68 17.84 0.00 0.00 27979.57 3783.21 22520.55 00:14:50.572 =================================================================================================================== 00:14:50.572 Total : 4567.68 17.84 0.00 0.00 27979.57 3783.21 22520.55 00:14:50.572 0 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83923 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83923 ']' 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83923 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83923 00:14:50.572 killing process with pid 83923 00:14:50.572 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.572 00:14:50.572 Latency(us) 00:14:50.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.572 =================================================================================================================== 00:14:50.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
83923' 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83923 00:14:50.572 [2024-07-21 16:30:08.721943] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:50.572 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83923 00:14:50.831 16:30:08 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x4tJTxNo6E 00:14:50.831 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:50.831 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x4tJTxNo6E 00:14:50.831 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:50.831 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:50.831 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:50.831 16:30:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x4tJTxNo6E 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.x4tJTxNo6E' 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84069 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84069 /var/tmp/bdevperf.sock 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84069 ']' 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.831 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.090 [2024-07-21 16:30:09.064233] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:14:51.090 [2024-07-21 16:30:09.064365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84069 ] 00:14:51.090 [2024-07-21 16:30:09.195523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.090 [2024-07-21 16:30:09.277935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.026 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.026 16:30:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:52.026 16:30:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x4tJTxNo6E 00:14:52.026 [2024-07-21 16:30:10.229833] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.026 [2024-07-21 16:30:10.229948] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:52.285 [2024-07-21 16:30:10.234870] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:52.285 [2024-07-21 16:30:10.235466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc1ca0 (107): Transport endpoint is not connected 00:14:52.285 [2024-07-21 16:30:10.236453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc1ca0 (9): Bad file descriptor 00:14:52.285 [2024-07-21 16:30:10.237448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:52.285 [2024-07-21 16:30:10.237469] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:52.285 [2024-07-21 16:30:10.237483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
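This bdevperf attach is a deliberate negative case: the key handed to --psk (/tmp/tmp.x4tJTxNo6E) is not the one that was registered for host1 via nvmf_subsystem_add_host, so the connection is torn down and the attach fails with the errors above and the JSON-RPC Code=-5 response that follows. For contrast, a sketch of the variant that succeeded earlier in this run, with the same addresses and NQNs but the registered key:

# initiator side, against the bdevperf application's RPC socket
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eB2xTuI8Jf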
00:14:52.285 2024/07/21 16:30:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.x4tJTxNo6E subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:52.285 request: 00:14:52.285 { 00:14:52.285 "method": "bdev_nvme_attach_controller", 00:14:52.285 "params": { 00:14:52.285 "name": "TLSTEST", 00:14:52.285 "trtype": "tcp", 00:14:52.285 "traddr": "10.0.0.2", 00:14:52.285 "adrfam": "ipv4", 00:14:52.285 "trsvcid": "4420", 00:14:52.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.285 "prchk_reftag": false, 00:14:52.285 "prchk_guard": false, 00:14:52.285 "hdgst": false, 00:14:52.285 "ddgst": false, 00:14:52.285 "psk": "/tmp/tmp.x4tJTxNo6E" 00:14:52.285 } 00:14:52.285 } 00:14:52.285 Got JSON-RPC error response 00:14:52.285 GoRPCClient: error on JSON-RPC call 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84069 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84069 ']' 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84069 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84069 00:14:52.285 killing process with pid 84069 00:14:52.285 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.285 00:14:52.285 Latency(us) 00:14:52.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.285 =================================================================================================================== 00:14:52.285 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84069' 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84069 00:14:52.285 [2024-07-21 16:30:10.272768] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:52.285 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84069 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eB2xTuI8Jf 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eB2xTuI8Jf 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eB2xTuI8Jf 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eB2xTuI8Jf' 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.543 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84115 00:14:52.544 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:52.544 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.544 16:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84115 /var/tmp/bdevperf.sock 00:14:52.544 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84115 ']' 00:14:52.544 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.544 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.544 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.544 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.544 16:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.544 [2024-07-21 16:30:10.592765] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:14:52.544 [2024-07-21 16:30:10.592889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84115 ] 00:14:52.544 [2024-07-21 16:30:10.723252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.802 [2024-07-21 16:30:10.797986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.369 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.369 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:53.369 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.eB2xTuI8Jf 00:14:53.627 [2024-07-21 16:30:11.762717] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.627 [2024-07-21 16:30:11.762844] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:53.627 [2024-07-21 16:30:11.774231] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.627 [2024-07-21 16:30:11.774273] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:53.627 [2024-07-21 16:30:11.774322] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:53.627 [2024-07-21 16:30:11.774347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe15ca0 (107): Transport endpoint is not connected 00:14:53.627 [2024-07-21 16:30:11.775336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe15ca0 (9): Bad file descriptor 00:14:53.627 [2024-07-21 16:30:11.776332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:53.627 [2024-07-21 16:30:11.776356] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:53.627 [2024-07-21 16:30:11.776371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
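The errors above show how the target resolves a TLS PSK: it builds an identity of the form "NVMe0R01 <host NQN> <subsystem NQN>" and looks it up among the hosts registered for that subsystem. In this negative test the initiator deliberately presents nqn.2016-06.io.spdk:host2, which was never registered, so the lookup fails. Purely as an illustration, the registration that would make such an identity resolvable uses the same RPC that appears later in this log; the key path below is the one from this test case.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Register host2 against cnode1 and associate it with a PSK file. Without this,
# tcp_sock_get_key cannot resolve the identity
# "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" and the
# handshake is aborted, which is exactly what the trace above records.
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.eB2xTuI8Jf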
00:14:53.627 2024/07/21 16:30:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.eB2xTuI8Jf subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:53.627 request: 00:14:53.627 { 00:14:53.627 "method": "bdev_nvme_attach_controller", 00:14:53.627 "params": { 00:14:53.627 "name": "TLSTEST", 00:14:53.627 "trtype": "tcp", 00:14:53.627 "traddr": "10.0.0.2", 00:14:53.627 "adrfam": "ipv4", 00:14:53.627 "trsvcid": "4420", 00:14:53.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.627 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:53.627 "prchk_reftag": false, 00:14:53.627 "prchk_guard": false, 00:14:53.627 "hdgst": false, 00:14:53.627 "ddgst": false, 00:14:53.627 "psk": "/tmp/tmp.eB2xTuI8Jf" 00:14:53.627 } 00:14:53.627 } 00:14:53.627 Got JSON-RPC error response 00:14:53.627 GoRPCClient: error on JSON-RPC call 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84115 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84115 ']' 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84115 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84115 00:14:53.627 killing process with pid 84115 00:14:53.627 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.627 00:14:53.627 Latency(us) 00:14:53.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.627 =================================================================================================================== 00:14:53.627 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84115' 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84115 00:14:53.627 [2024-07-21 16:30:11.825569] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:53.627 16:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84115 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eB2xTuI8Jf 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eB2xTuI8Jf 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:53.885 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:53.886 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eB2xTuI8Jf 00:14:53.886 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:53.886 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:53.886 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:53.886 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.eB2xTuI8Jf' 00:14:53.886 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84159 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84159 /var/tmp/bdevperf.sock 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84159 ']' 00:14:54.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.143 16:30:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:54.143 [2024-07-21 16:30:12.138589] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:14:54.143 [2024-07-21 16:30:12.138679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84159 ] 00:14:54.143 [2024-07-21 16:30:12.268385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.401 [2024-07-21 16:30:12.356423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.968 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.968 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:54.968 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eB2xTuI8Jf 00:14:55.225 [2024-07-21 16:30:13.316519] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:55.225 [2024-07-21 16:30:13.316677] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:55.225 [2024-07-21 16:30:13.324419] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:55.225 [2024-07-21 16:30:13.324453] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:55.225 [2024-07-21 16:30:13.324499] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:55.225 [2024-07-21 16:30:13.325171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff6ca0 (107): Transport endpoint is not connected 00:14:55.225 [2024-07-21 16:30:13.326159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff6ca0 (9): Bad file descriptor 00:14:55.225 [2024-07-21 16:30:13.327155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:55.225 [2024-07-21 16:30:13.327177] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:55.225 [2024-07-21 16:30:13.327191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
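The NOT/valid_exec_arg calls in the trace are the harness's expected-failure wrapper: the test passes only if the wrapped run_bdevperf invocation fails. A minimal bash sketch of that pattern follows; it illustrates the idea and is not the exact helper from common/autotest_common.sh (which, as the trace shows, also inspects exit codes above 128).

# Sketch of the negative-test pattern visible in the trace (NOT run_bdevperf ...).
NOT() {
    local es=0
    "$@" || es=$?          # run the wrapped command, capture its exit status
    (( es != 0 ))          # succeed only if the command failed, so an
                           # "expected failure" keeps the test script green
}

# Usage, mirroring the trace: the attach with a subsystem NQN the target does not
# serve for this PSK must fail, otherwise NOT itself fails and the test aborts.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eB2xTuI8Jf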
00:14:55.225 2024/07/21 16:30:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.eB2xTuI8Jf subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:55.225 request: 00:14:55.225 { 00:14:55.225 "method": "bdev_nvme_attach_controller", 00:14:55.225 "params": { 00:14:55.225 "name": "TLSTEST", 00:14:55.225 "trtype": "tcp", 00:14:55.225 "traddr": "10.0.0.2", 00:14:55.225 "adrfam": "ipv4", 00:14:55.225 "trsvcid": "4420", 00:14:55.225 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:55.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.225 "prchk_reftag": false, 00:14:55.225 "prchk_guard": false, 00:14:55.225 "hdgst": false, 00:14:55.225 "ddgst": false, 00:14:55.225 "psk": "/tmp/tmp.eB2xTuI8Jf" 00:14:55.225 } 00:14:55.225 } 00:14:55.225 Got JSON-RPC error response 00:14:55.225 GoRPCClient: error on JSON-RPC call 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84159 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84159 ']' 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84159 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84159 00:14:55.225 killing process with pid 84159 00:14:55.225 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.225 00:14:55.225 Latency(us) 00:14:55.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.225 =================================================================================================================== 00:14:55.225 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84159' 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84159 00:14:55.225 [2024-07-21 16:30:13.369865] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:55.225 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84159 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:55.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84206 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84206 /var/tmp/bdevperf.sock 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84206 ']' 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.483 16:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.741 [2024-07-21 16:30:13.693912] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:14:55.741 [2024-07-21 16:30:13.694031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84206 ] 00:14:55.741 [2024-07-21 16:30:13.824017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.741 [2024-07-21 16:30:13.899953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:56.672 [2024-07-21 16:30:14.824599] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:56.672 [2024-07-21 16:30:14.826157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d01240 (9): Bad file descriptor 00:14:56.672 [2024-07-21 16:30:14.827150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:56.672 [2024-07-21 16:30:14.827176] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:56.672 [2024-07-21 16:30:14.827191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:14:56.672 2024/07/21 16:30:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:56.672 request: 00:14:56.672 { 00:14:56.672 "method": "bdev_nvme_attach_controller", 00:14:56.672 "params": { 00:14:56.672 "name": "TLSTEST", 00:14:56.672 "trtype": "tcp", 00:14:56.672 "traddr": "10.0.0.2", 00:14:56.672 "adrfam": "ipv4", 00:14:56.672 "trsvcid": "4420", 00:14:56.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:56.672 "prchk_reftag": false, 00:14:56.672 "prchk_guard": false, 00:14:56.672 "hdgst": false, 00:14:56.672 "ddgst": false 00:14:56.672 } 00:14:56.672 } 00:14:56.672 Got JSON-RPC error response 00:14:56.672 GoRPCClient: error on JSON-RPC call 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84206 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84206 ']' 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84206 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84206 00:14:56.672 killing process with pid 84206 00:14:56.672 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.672 00:14:56.672 Latency(us) 00:14:56.672 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.672 =================================================================================================================== 00:14:56.672 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84206' 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84206 00:14:56.672 16:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84206 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83564 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83564 ']' 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83564 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.936 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83564 00:14:57.193 killing process with pid 83564 00:14:57.193 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:57.193 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:57.193 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83564' 00:14:57.193 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83564 00:14:57.193 [2024-07-21 16:30:15.155875] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:57.193 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83564 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.kTbHpq3SyG 
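The NVMeTLSkey-1:02:... value generated above is the TLS PSK in interchange form: a fixed prefix, a hash indicator (02 here), and a base64 payload, colon-terminated. Decoding the payload of this run's key shows the 48-character key string given to format_interchange_psk followed by a short binary trailer (an integrity check over the key material). The sketch below only inspects the value already printed in this log; no new key material is introduced.

# Inspect the interchange PSK produced above.
key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'

IFS=':' read -r prefix digest payload _ <<< "$key_long"   # "NVMeTLSkey-1" / "02" / base64
echo "prefix=$prefix digest=$digest"

# The payload decodes to the configured key string plus a 4-byte trailer.
echo -n "$payload" | base64 -d | head -c 48; echo          # prints 00112233...556677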
00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.kTbHpq3SyG 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84262 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84262 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84262 ']' 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.451 16:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.451 [2024-07-21 16:30:15.552229] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:14:57.451 [2024-07-21 16:30:15.552346] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.709 [2024-07-21 16:30:15.691193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.709 [2024-07-21 16:30:15.773964] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.709 [2024-07-21 16:30:15.774024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.709 [2024-07-21 16:30:15.774034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.709 [2024-07-21 16:30:15.774041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.709 [2024-07-21 16:30:15.774047] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
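With the key written to /tmp/tmp.kTbHpq3SyG (mode 0600) and the nvmf target up on /var/tmp/spdk.sock, the setup_nvmf_tgt step traced below configures a TLS-enabled NVMe/TCP subsystem. The sketch simply collects the rpc.py calls visible in the surrounding trace; 10.0.0.2:4420 and the NQNs are this run's values.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
KEY=/tmp/tmp.kTbHpq3SyG        # interchange PSK written above, mode 0600

"$RPC" nvmf_create_transport -t tcp -o                                  # TCP transport
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -s SPDK00000000000001 -m 10                                      # subsystem
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420 -k                                    # -k enables TLS on the listener
"$RPC" bdev_malloc_create 32 4096 -b malloc0                            # backing bdev
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # namespace
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
       nqn.2016-06.io.spdk:host1 --psk "$KEY"                           # host + PSK registration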
00:14:57.709 [2024-07-21 16:30:15.774072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.kTbHpq3SyG 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kTbHpq3SyG 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:58.642 [2024-07-21 16:30:16.818236] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.642 16:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:58.900 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:59.159 [2024-07-21 16:30:17.310336] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:59.159 [2024-07-21 16:30:17.310621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.159 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:59.417 malloc0 00:14:59.417 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:59.676 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kTbHpq3SyG 00:14:59.935 [2024-07-21 16:30:17.944503] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kTbHpq3SyG 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kTbHpq3SyG' 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84366 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:59.935 16:30:17 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84366 /var/tmp/bdevperf.sock 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84366 ']' 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.935 16:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.935 [2024-07-21 16:30:18.019508] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:14:59.935 [2024-07-21 16:30:18.019602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84366 ] 00:15:00.192 [2024-07-21 16:30:18.156142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.192 [2024-07-21 16:30:18.258904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.126 16:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.126 16:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:01.126 16:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kTbHpq3SyG 00:15:01.126 [2024-07-21 16:30:19.185863] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.126 [2024-07-21 16:30:19.185982] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:01.126 TLSTESTn1 00:15:01.126 16:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:01.384 Running I/O for 10 seconds... 
00:15:11.382 00:15:11.382 Latency(us) 00:15:11.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.382 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:11.382 Verification LBA range: start 0x0 length 0x2000 00:15:11.382 TLSTESTn1 : 10.02 4852.99 18.96 0.00 0.00 26327.46 6136.55 16443.58 00:15:11.382 =================================================================================================================== 00:15:11.382 Total : 4852.99 18.96 0.00 0.00 26327.46 6136.55 16443.58 00:15:11.382 0 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84366 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84366 ']' 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84366 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84366 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:11.382 killing process with pid 84366 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84366' 00:15:11.382 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.382 00:15:11.382 Latency(us) 00:15:11.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.382 =================================================================================================================== 00:15:11.382 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84366 00:15:11.382 [2024-07-21 16:30:29.464731] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:11.382 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84366 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.kTbHpq3SyG 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kTbHpq3SyG 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kTbHpq3SyG 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kTbHpq3SyG 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:11.640 
16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kTbHpq3SyG' 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:11.640 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84519 00:15:11.641 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.641 16:30:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84519 /var/tmp/bdevperf.sock 00:15:11.641 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84519 ']' 00:15:11.641 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.641 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.641 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.641 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.641 16:30:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.641 [2024-07-21 16:30:29.793217] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:11.641 [2024-07-21 16:30:29.793315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84519 ] 00:15:11.899 [2024-07-21 16:30:29.919187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.899 [2024-07-21 16:30:29.994875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.157 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.157 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:12.157 16:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kTbHpq3SyG 00:15:12.416 [2024-07-21 16:30:30.369341] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:12.416 [2024-07-21 16:30:30.369434] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:12.416 [2024-07-21 16:30:30.369445] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.kTbHpq3SyG 00:15:12.416 2024/07/21 16:30:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.kTbHpq3SyG subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:15:12.416 request: 00:15:12.416 { 00:15:12.416 "method": "bdev_nvme_attach_controller", 00:15:12.416 "params": { 00:15:12.416 "name": "TLSTEST", 00:15:12.416 "trtype": "tcp", 00:15:12.416 "traddr": "10.0.0.2", 00:15:12.416 "adrfam": "ipv4", 00:15:12.416 "trsvcid": "4420", 00:15:12.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.416 "prchk_reftag": false, 00:15:12.416 "prchk_guard": false, 00:15:12.416 "hdgst": false, 00:15:12.416 "ddgst": false, 00:15:12.416 "psk": "/tmp/tmp.kTbHpq3SyG" 00:15:12.416 } 00:15:12.416 } 00:15:12.416 Got JSON-RPC error response 00:15:12.416 GoRPCClient: error on JSON-RPC call 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84519 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84519 ']' 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84519 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84519 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:12.416 killing process with pid 84519 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84519' 00:15:12.416 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.416 00:15:12.416 Latency(us) 00:15:12.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.416 =================================================================================================================== 00:15:12.416 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84519 00:15:12.416 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84519 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84262 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84262 ']' 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84262 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84262 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:12.674 killing process with pid 84262 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84262' 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84262 00:15:12.674 [2024-07-21 16:30:30.738165] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:12.674 16:30:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84262 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84556 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84556 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84556 ']' 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.932 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.932 [2024-07-21 16:30:31.085715] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:12.932 [2024-07-21 16:30:31.085821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.190 [2024-07-21 16:30:31.216164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.190 [2024-07-21 16:30:31.291168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.190 [2024-07-21 16:30:31.291232] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.190 [2024-07-21 16:30:31.291242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.190 [2024-07-21 16:30:31.291250] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.190 [2024-07-21 16:30:31.291256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
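The failure above, and the nvmf_subsystem_add_host failure that follows, is the point of this test case: both the initiator and the target refuse a PSK file whose mode bits allow group/other access. The test drives that with nothing more than chmod, as sketched below using the commands from this log.

KEY=/tmp/tmp.kTbHpq3SyG
chmod 0666 "$KEY"   # world-readable: bdev_nvme_attach_controller reports
                    # "Incorrect permissions for PSK file"; nvmf_subsystem_add_host
                    # fails the same way on the target side
chmod 0600 "$KEY"   # owner-only: the same RPCs succeed and the TLSTESTn1 I/O run passes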
00:15:13.190 [2024-07-21 16:30:31.291296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.123 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.123 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:14.123 16:30:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:14.123 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:14.123 16:30:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.kTbHpq3SyG 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.kTbHpq3SyG 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.kTbHpq3SyG 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kTbHpq3SyG 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:14.124 [2024-07-21 16:30:32.277620] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.124 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:14.381 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:14.639 [2024-07-21 16:30:32.657660] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:14.639 [2024-07-21 16:30:32.657874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.639 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:14.896 malloc0 00:15:14.896 16:30:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:15.154 16:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kTbHpq3SyG 00:15:15.412 [2024-07-21 16:30:33.403630] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:15.412 [2024-07-21 16:30:33.403700] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:15.412 [2024-07-21 16:30:33.403741] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:15.412 2024/07/21 16:30:33 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.kTbHpq3SyG], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:15:15.412 request: 00:15:15.412 { 00:15:15.412 "method": "nvmf_subsystem_add_host", 00:15:15.412 "params": { 00:15:15.412 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.412 "host": "nqn.2016-06.io.spdk:host1", 00:15:15.412 "psk": "/tmp/tmp.kTbHpq3SyG" 00:15:15.412 } 00:15:15.412 } 00:15:15.412 Got JSON-RPC error response 00:15:15.412 GoRPCClient: error on JSON-RPC call 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84556 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84556 ']' 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84556 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84556 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:15.412 killing process with pid 84556 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84556' 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84556 00:15:15.412 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84556 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.kTbHpq3SyG 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84661 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84661 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84661 ']' 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.670 16:30:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.670 [2024-07-21 16:30:33.819788] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:15.670 [2024-07-21 16:30:33.819864] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.928 [2024-07-21 16:30:33.945801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.928 [2024-07-21 16:30:34.045594] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.928 [2024-07-21 16:30:34.045661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.928 [2024-07-21 16:30:34.045672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.928 [2024-07-21 16:30:34.045680] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.928 [2024-07-21 16:30:34.045687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.928 [2024-07-21 16:30:34.045720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.860 16:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.860 16:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:16.860 16:30:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:16.860 16:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:16.860 16:30:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.860 16:30:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:16.860 16:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.kTbHpq3SyG 00:15:16.860 16:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kTbHpq3SyG 00:15:16.860 16:30:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:16.860 [2024-07-21 16:30:35.023558] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:16.860 16:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:17.126 16:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:17.383 [2024-07-21 16:30:35.531591] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:17.383 [2024-07-21 16:30:35.531876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.383 16:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:17.641 malloc0 00:15:17.641 16:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:18.210 16:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kTbHpq3SyG 00:15:18.210 [2024-07-21 16:30:36.321910] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:18.210 16:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:18.210 16:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84768 00:15:18.210 16:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:18.211 16:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84768 /var/tmp/bdevperf.sock 00:15:18.211 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84768 ']' 00:15:18.211 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.211 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:18.211 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.211 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.211 16:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.211 [2024-07-21 16:30:36.380906] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:18.211 [2024-07-21 16:30:36.381015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84768 ] 00:15:18.472 [2024-07-21 16:30:36.506199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.472 [2024-07-21 16:30:36.593350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.405 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.405 16:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:19.405 16:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kTbHpq3SyG 00:15:19.405 [2024-07-21 16:30:37.455740] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:19.405 [2024-07-21 16:30:37.455888] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:19.405 TLSTESTn1 00:15:19.405 16:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:19.663 16:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:19.663 "subsystems": [ 00:15:19.663 { 00:15:19.663 "subsystem": "keyring", 00:15:19.663 "config": [] 00:15:19.663 }, 00:15:19.663 { 00:15:19.663 "subsystem": "iobuf", 00:15:19.663 "config": [ 00:15:19.663 { 00:15:19.663 "method": "iobuf_set_options", 00:15:19.663 "params": { 00:15:19.663 "large_bufsize": 
135168, 00:15:19.663 "large_pool_count": 1024, 00:15:19.663 "small_bufsize": 8192, 00:15:19.663 "small_pool_count": 8192 00:15:19.663 } 00:15:19.663 } 00:15:19.663 ] 00:15:19.663 }, 00:15:19.663 { 00:15:19.663 "subsystem": "sock", 00:15:19.663 "config": [ 00:15:19.663 { 00:15:19.663 "method": "sock_set_default_impl", 00:15:19.663 "params": { 00:15:19.663 "impl_name": "posix" 00:15:19.663 } 00:15:19.663 }, 00:15:19.663 { 00:15:19.663 "method": "sock_impl_set_options", 00:15:19.663 "params": { 00:15:19.663 "enable_ktls": false, 00:15:19.663 "enable_placement_id": 0, 00:15:19.663 "enable_quickack": false, 00:15:19.663 "enable_recv_pipe": true, 00:15:19.663 "enable_zerocopy_send_client": false, 00:15:19.663 "enable_zerocopy_send_server": true, 00:15:19.663 "impl_name": "ssl", 00:15:19.663 "recv_buf_size": 4096, 00:15:19.663 "send_buf_size": 4096, 00:15:19.663 "tls_version": 0, 00:15:19.663 "zerocopy_threshold": 0 00:15:19.663 } 00:15:19.663 }, 00:15:19.663 { 00:15:19.663 "method": "sock_impl_set_options", 00:15:19.663 "params": { 00:15:19.663 "enable_ktls": false, 00:15:19.663 "enable_placement_id": 0, 00:15:19.663 "enable_quickack": false, 00:15:19.663 "enable_recv_pipe": true, 00:15:19.663 "enable_zerocopy_send_client": false, 00:15:19.663 "enable_zerocopy_send_server": true, 00:15:19.663 "impl_name": "posix", 00:15:19.663 "recv_buf_size": 2097152, 00:15:19.663 "send_buf_size": 2097152, 00:15:19.663 "tls_version": 0, 00:15:19.663 "zerocopy_threshold": 0 00:15:19.663 } 00:15:19.663 } 00:15:19.663 ] 00:15:19.663 }, 00:15:19.663 { 00:15:19.663 "subsystem": "vmd", 00:15:19.663 "config": [] 00:15:19.663 }, 00:15:19.663 { 00:15:19.664 "subsystem": "accel", 00:15:19.664 "config": [ 00:15:19.664 { 00:15:19.664 "method": "accel_set_options", 00:15:19.664 "params": { 00:15:19.664 "buf_count": 2048, 00:15:19.664 "large_cache_size": 16, 00:15:19.664 "sequence_count": 2048, 00:15:19.664 "small_cache_size": 128, 00:15:19.664 "task_count": 2048 00:15:19.664 } 00:15:19.664 } 00:15:19.664 ] 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "subsystem": "bdev", 00:15:19.664 "config": [ 00:15:19.664 { 00:15:19.664 "method": "bdev_set_options", 00:15:19.664 "params": { 00:15:19.664 "bdev_auto_examine": true, 00:15:19.664 "bdev_io_cache_size": 256, 00:15:19.664 "bdev_io_pool_size": 65535, 00:15:19.664 "iobuf_large_cache_size": 16, 00:15:19.664 "iobuf_small_cache_size": 128 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "bdev_raid_set_options", 00:15:19.664 "params": { 00:15:19.664 "process_max_bandwidth_mb_sec": 0, 00:15:19.664 "process_window_size_kb": 1024 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "bdev_iscsi_set_options", 00:15:19.664 "params": { 00:15:19.664 "timeout_sec": 30 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "bdev_nvme_set_options", 00:15:19.664 "params": { 00:15:19.664 "action_on_timeout": "none", 00:15:19.664 "allow_accel_sequence": false, 00:15:19.664 "arbitration_burst": 0, 00:15:19.664 "bdev_retry_count": 3, 00:15:19.664 "ctrlr_loss_timeout_sec": 0, 00:15:19.664 "delay_cmd_submit": true, 00:15:19.664 "dhchap_dhgroups": [ 00:15:19.664 "null", 00:15:19.664 "ffdhe2048", 00:15:19.664 "ffdhe3072", 00:15:19.664 "ffdhe4096", 00:15:19.664 "ffdhe6144", 00:15:19.664 "ffdhe8192" 00:15:19.664 ], 00:15:19.664 "dhchap_digests": [ 00:15:19.664 "sha256", 00:15:19.664 "sha384", 00:15:19.664 "sha512" 00:15:19.664 ], 00:15:19.664 "disable_auto_failback": false, 00:15:19.664 "fast_io_fail_timeout_sec": 0, 00:15:19.664 "generate_uuids": false, 
00:15:19.664 "high_priority_weight": 0, 00:15:19.664 "io_path_stat": false, 00:15:19.664 "io_queue_requests": 0, 00:15:19.664 "keep_alive_timeout_ms": 10000, 00:15:19.664 "low_priority_weight": 0, 00:15:19.664 "medium_priority_weight": 0, 00:15:19.664 "nvme_adminq_poll_period_us": 10000, 00:15:19.664 "nvme_error_stat": false, 00:15:19.664 "nvme_ioq_poll_period_us": 0, 00:15:19.664 "rdma_cm_event_timeout_ms": 0, 00:15:19.664 "rdma_max_cq_size": 0, 00:15:19.664 "rdma_srq_size": 0, 00:15:19.664 "reconnect_delay_sec": 0, 00:15:19.664 "timeout_admin_us": 0, 00:15:19.664 "timeout_us": 0, 00:15:19.664 "transport_ack_timeout": 0, 00:15:19.664 "transport_retry_count": 4, 00:15:19.664 "transport_tos": 0 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "bdev_nvme_set_hotplug", 00:15:19.664 "params": { 00:15:19.664 "enable": false, 00:15:19.664 "period_us": 100000 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "bdev_malloc_create", 00:15:19.664 "params": { 00:15:19.664 "block_size": 4096, 00:15:19.664 "name": "malloc0", 00:15:19.664 "num_blocks": 8192, 00:15:19.664 "optimal_io_boundary": 0, 00:15:19.664 "physical_block_size": 4096, 00:15:19.664 "uuid": "9d1bad7f-c4a5-462f-9bd1-218b5d3d5562" 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "bdev_wait_for_examine" 00:15:19.664 } 00:15:19.664 ] 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "subsystem": "nbd", 00:15:19.664 "config": [] 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "subsystem": "scheduler", 00:15:19.664 "config": [ 00:15:19.664 { 00:15:19.664 "method": "framework_set_scheduler", 00:15:19.664 "params": { 00:15:19.664 "name": "static" 00:15:19.664 } 00:15:19.664 } 00:15:19.664 ] 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "subsystem": "nvmf", 00:15:19.664 "config": [ 00:15:19.664 { 00:15:19.664 "method": "nvmf_set_config", 00:15:19.664 "params": { 00:15:19.664 "admin_cmd_passthru": { 00:15:19.664 "identify_ctrlr": false 00:15:19.664 }, 00:15:19.664 "discovery_filter": "match_any" 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "nvmf_set_max_subsystems", 00:15:19.664 "params": { 00:15:19.664 "max_subsystems": 1024 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "nvmf_set_crdt", 00:15:19.664 "params": { 00:15:19.664 "crdt1": 0, 00:15:19.664 "crdt2": 0, 00:15:19.664 "crdt3": 0 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "nvmf_create_transport", 00:15:19.664 "params": { 00:15:19.664 "abort_timeout_sec": 1, 00:15:19.664 "ack_timeout": 0, 00:15:19.664 "buf_cache_size": 4294967295, 00:15:19.664 "c2h_success": false, 00:15:19.664 "data_wr_pool_size": 0, 00:15:19.664 "dif_insert_or_strip": false, 00:15:19.664 "in_capsule_data_size": 4096, 00:15:19.664 "io_unit_size": 131072, 00:15:19.664 "max_aq_depth": 128, 00:15:19.664 "max_io_qpairs_per_ctrlr": 127, 00:15:19.664 "max_io_size": 131072, 00:15:19.664 "max_queue_depth": 128, 00:15:19.664 "num_shared_buffers": 511, 00:15:19.664 "sock_priority": 0, 00:15:19.664 "trtype": "TCP", 00:15:19.664 "zcopy": false 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "nvmf_create_subsystem", 00:15:19.664 "params": { 00:15:19.664 "allow_any_host": false, 00:15:19.664 "ana_reporting": false, 00:15:19.664 "max_cntlid": 65519, 00:15:19.664 "max_namespaces": 10, 00:15:19.664 "min_cntlid": 1, 00:15:19.664 "model_number": "SPDK bdev Controller", 00:15:19.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.664 "serial_number": "SPDK00000000000001" 00:15:19.664 } 00:15:19.664 }, 
00:15:19.664 { 00:15:19.664 "method": "nvmf_subsystem_add_host", 00:15:19.664 "params": { 00:15:19.664 "host": "nqn.2016-06.io.spdk:host1", 00:15:19.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.664 "psk": "/tmp/tmp.kTbHpq3SyG" 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "nvmf_subsystem_add_ns", 00:15:19.664 "params": { 00:15:19.664 "namespace": { 00:15:19.664 "bdev_name": "malloc0", 00:15:19.664 "nguid": "9D1BAD7FC4A5462F9BD1218B5D3D5562", 00:15:19.664 "no_auto_visible": false, 00:15:19.664 "nsid": 1, 00:15:19.664 "uuid": "9d1bad7f-c4a5-462f-9bd1-218b5d3d5562" 00:15:19.664 }, 00:15:19.664 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:19.664 } 00:15:19.664 }, 00:15:19.664 { 00:15:19.664 "method": "nvmf_subsystem_add_listener", 00:15:19.664 "params": { 00:15:19.664 "listen_address": { 00:15:19.664 "adrfam": "IPv4", 00:15:19.664 "traddr": "10.0.0.2", 00:15:19.664 "trsvcid": "4420", 00:15:19.664 "trtype": "TCP" 00:15:19.664 }, 00:15:19.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.664 "secure_channel": true 00:15:19.664 } 00:15:19.664 } 00:15:19.664 ] 00:15:19.664 } 00:15:19.664 ] 00:15:19.664 }' 00:15:19.664 16:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:20.231 16:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:20.231 "subsystems": [ 00:15:20.231 { 00:15:20.231 "subsystem": "keyring", 00:15:20.231 "config": [] 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "subsystem": "iobuf", 00:15:20.231 "config": [ 00:15:20.231 { 00:15:20.231 "method": "iobuf_set_options", 00:15:20.231 "params": { 00:15:20.231 "large_bufsize": 135168, 00:15:20.231 "large_pool_count": 1024, 00:15:20.231 "small_bufsize": 8192, 00:15:20.231 "small_pool_count": 8192 00:15:20.231 } 00:15:20.231 } 00:15:20.231 ] 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "subsystem": "sock", 00:15:20.231 "config": [ 00:15:20.231 { 00:15:20.231 "method": "sock_set_default_impl", 00:15:20.231 "params": { 00:15:20.231 "impl_name": "posix" 00:15:20.231 } 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "method": "sock_impl_set_options", 00:15:20.231 "params": { 00:15:20.231 "enable_ktls": false, 00:15:20.231 "enable_placement_id": 0, 00:15:20.231 "enable_quickack": false, 00:15:20.231 "enable_recv_pipe": true, 00:15:20.231 "enable_zerocopy_send_client": false, 00:15:20.231 "enable_zerocopy_send_server": true, 00:15:20.231 "impl_name": "ssl", 00:15:20.231 "recv_buf_size": 4096, 00:15:20.231 "send_buf_size": 4096, 00:15:20.231 "tls_version": 0, 00:15:20.231 "zerocopy_threshold": 0 00:15:20.231 } 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "method": "sock_impl_set_options", 00:15:20.231 "params": { 00:15:20.231 "enable_ktls": false, 00:15:20.231 "enable_placement_id": 0, 00:15:20.231 "enable_quickack": false, 00:15:20.231 "enable_recv_pipe": true, 00:15:20.231 "enable_zerocopy_send_client": false, 00:15:20.231 "enable_zerocopy_send_server": true, 00:15:20.231 "impl_name": "posix", 00:15:20.231 "recv_buf_size": 2097152, 00:15:20.231 "send_buf_size": 2097152, 00:15:20.231 "tls_version": 0, 00:15:20.231 "zerocopy_threshold": 0 00:15:20.231 } 00:15:20.231 } 00:15:20.231 ] 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "subsystem": "vmd", 00:15:20.231 "config": [] 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "subsystem": "accel", 00:15:20.231 "config": [ 00:15:20.231 { 00:15:20.231 "method": "accel_set_options", 00:15:20.231 "params": { 00:15:20.231 "buf_count": 2048, 00:15:20.231 "large_cache_size": 16, 00:15:20.231 
"sequence_count": 2048, 00:15:20.231 "small_cache_size": 128, 00:15:20.231 "task_count": 2048 00:15:20.231 } 00:15:20.231 } 00:15:20.231 ] 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "subsystem": "bdev", 00:15:20.231 "config": [ 00:15:20.231 { 00:15:20.231 "method": "bdev_set_options", 00:15:20.231 "params": { 00:15:20.231 "bdev_auto_examine": true, 00:15:20.231 "bdev_io_cache_size": 256, 00:15:20.231 "bdev_io_pool_size": 65535, 00:15:20.231 "iobuf_large_cache_size": 16, 00:15:20.231 "iobuf_small_cache_size": 128 00:15:20.231 } 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "method": "bdev_raid_set_options", 00:15:20.231 "params": { 00:15:20.231 "process_max_bandwidth_mb_sec": 0, 00:15:20.231 "process_window_size_kb": 1024 00:15:20.231 } 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "method": "bdev_iscsi_set_options", 00:15:20.231 "params": { 00:15:20.231 "timeout_sec": 30 00:15:20.231 } 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "method": "bdev_nvme_set_options", 00:15:20.231 "params": { 00:15:20.231 "action_on_timeout": "none", 00:15:20.231 "allow_accel_sequence": false, 00:15:20.231 "arbitration_burst": 0, 00:15:20.231 "bdev_retry_count": 3, 00:15:20.231 "ctrlr_loss_timeout_sec": 0, 00:15:20.231 "delay_cmd_submit": true, 00:15:20.231 "dhchap_dhgroups": [ 00:15:20.231 "null", 00:15:20.231 "ffdhe2048", 00:15:20.231 "ffdhe3072", 00:15:20.231 "ffdhe4096", 00:15:20.231 "ffdhe6144", 00:15:20.231 "ffdhe8192" 00:15:20.231 ], 00:15:20.231 "dhchap_digests": [ 00:15:20.231 "sha256", 00:15:20.231 "sha384", 00:15:20.231 "sha512" 00:15:20.231 ], 00:15:20.231 "disable_auto_failback": false, 00:15:20.231 "fast_io_fail_timeout_sec": 0, 00:15:20.231 "generate_uuids": false, 00:15:20.231 "high_priority_weight": 0, 00:15:20.231 "io_path_stat": false, 00:15:20.231 "io_queue_requests": 512, 00:15:20.231 "keep_alive_timeout_ms": 10000, 00:15:20.231 "low_priority_weight": 0, 00:15:20.231 "medium_priority_weight": 0, 00:15:20.231 "nvme_adminq_poll_period_us": 10000, 00:15:20.231 "nvme_error_stat": false, 00:15:20.231 "nvme_ioq_poll_period_us": 0, 00:15:20.231 "rdma_cm_event_timeout_ms": 0, 00:15:20.231 "rdma_max_cq_size": 0, 00:15:20.231 "rdma_srq_size": 0, 00:15:20.231 "reconnect_delay_sec": 0, 00:15:20.231 "timeout_admin_us": 0, 00:15:20.231 "timeout_us": 0, 00:15:20.231 "transport_ack_timeout": 0, 00:15:20.231 "transport_retry_count": 4, 00:15:20.231 "transport_tos": 0 00:15:20.231 } 00:15:20.231 }, 00:15:20.231 { 00:15:20.231 "method": "bdev_nvme_attach_controller", 00:15:20.231 "params": { 00:15:20.231 "adrfam": "IPv4", 00:15:20.231 "ctrlr_loss_timeout_sec": 0, 00:15:20.231 "ddgst": false, 00:15:20.231 "fast_io_fail_timeout_sec": 0, 00:15:20.232 "hdgst": false, 00:15:20.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.232 "name": "TLSTEST", 00:15:20.232 "prchk_guard": false, 00:15:20.232 "prchk_reftag": false, 00:15:20.232 "psk": "/tmp/tmp.kTbHpq3SyG", 00:15:20.232 "reconnect_delay_sec": 0, 00:15:20.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.232 "traddr": "10.0.0.2", 00:15:20.232 "trsvcid": "4420", 00:15:20.232 "trtype": "TCP" 00:15:20.232 } 00:15:20.232 }, 00:15:20.232 { 00:15:20.232 "method": "bdev_nvme_set_hotplug", 00:15:20.232 "params": { 00:15:20.232 "enable": false, 00:15:20.232 "period_us": 100000 00:15:20.232 } 00:15:20.232 }, 00:15:20.232 { 00:15:20.232 "method": "bdev_wait_for_examine" 00:15:20.232 } 00:15:20.232 ] 00:15:20.232 }, 00:15:20.232 { 00:15:20.232 "subsystem": "nbd", 00:15:20.232 "config": [] 00:15:20.232 } 00:15:20.232 ] 00:15:20.232 }' 00:15:20.232 16:30:38 
nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84768 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84768 ']' 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84768 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84768 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:20.232 killing process with pid 84768 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84768' 00:15:20.232 Received shutdown signal, test time was about 10.000000 seconds 00:15:20.232 00:15:20.232 Latency(us) 00:15:20.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.232 =================================================================================================================== 00:15:20.232 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84768 00:15:20.232 [2024-07-21 16:30:38.164598] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:20.232 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84768 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84661 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84661 ']' 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84661 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84661 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:20.490 killing process with pid 84661 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84661' 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84661 00:15:20.490 [2024-07-21 16:30:38.496904] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:20.490 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84661 00:15:20.748 16:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:20.748 16:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:20.748 "subsystems": [ 00:15:20.748 { 00:15:20.748 "subsystem": "keyring", 00:15:20.748 "config": [] 00:15:20.748 }, 00:15:20.748 { 00:15:20.749 "subsystem": "iobuf", 00:15:20.749 "config": [ 00:15:20.749 { 00:15:20.749 "method": "iobuf_set_options", 00:15:20.749 "params": { 00:15:20.749 "large_bufsize": 135168, 00:15:20.749 "large_pool_count": 1024, 00:15:20.749 "small_bufsize": 8192, 00:15:20.749 "small_pool_count": 
8192 00:15:20.749 } 00:15:20.749 } 00:15:20.749 ] 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "subsystem": "sock", 00:15:20.749 "config": [ 00:15:20.749 { 00:15:20.749 "method": "sock_set_default_impl", 00:15:20.749 "params": { 00:15:20.749 "impl_name": "posix" 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "sock_impl_set_options", 00:15:20.749 "params": { 00:15:20.749 "enable_ktls": false, 00:15:20.749 "enable_placement_id": 0, 00:15:20.749 "enable_quickack": false, 00:15:20.749 "enable_recv_pipe": true, 00:15:20.749 "enable_zerocopy_send_client": false, 00:15:20.749 "enable_zerocopy_send_server": true, 00:15:20.749 "impl_name": "ssl", 00:15:20.749 "recv_buf_size": 4096, 00:15:20.749 "send_buf_size": 4096, 00:15:20.749 "tls_version": 0, 00:15:20.749 "zerocopy_threshold": 0 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "sock_impl_set_options", 00:15:20.749 "params": { 00:15:20.749 "enable_ktls": false, 00:15:20.749 "enable_placement_id": 0, 00:15:20.749 "enable_quickack": false, 00:15:20.749 "enable_recv_pipe": true, 00:15:20.749 "enable_zerocopy_send_client": false, 00:15:20.749 "enable_zerocopy_send_server": true, 00:15:20.749 "impl_name": "posix", 00:15:20.749 "recv_buf_size": 2097152, 00:15:20.749 "send_buf_size": 2097152, 00:15:20.749 "tls_version": 0, 00:15:20.749 "zerocopy_threshold": 0 00:15:20.749 } 00:15:20.749 } 00:15:20.749 ] 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "subsystem": "vmd", 00:15:20.749 "config": [] 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "subsystem": "accel", 00:15:20.749 "config": [ 00:15:20.749 { 00:15:20.749 "method": "accel_set_options", 00:15:20.749 "params": { 00:15:20.749 "buf_count": 2048, 00:15:20.749 "large_cache_size": 16, 00:15:20.749 "sequence_count": 2048, 00:15:20.749 "small_cache_size": 128, 00:15:20.749 "task_count": 2048 00:15:20.749 } 00:15:20.749 } 00:15:20.749 ] 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "subsystem": "bdev", 00:15:20.749 "config": [ 00:15:20.749 { 00:15:20.749 "method": "bdev_set_options", 00:15:20.749 "params": { 00:15:20.749 "bdev_auto_examine": true, 00:15:20.749 "bdev_io_cache_size": 256, 00:15:20.749 "bdev_io_pool_size": 65535, 00:15:20.749 "iobuf_large_cache_size": 16, 00:15:20.749 "iobuf_small_cache_size": 128 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "bdev_raid_set_options", 00:15:20.749 "params": { 00:15:20.749 "process_max_bandwidth_mb_sec": 0, 00:15:20.749 "process_window_size_kb": 1024 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "bdev_iscsi_set_options", 00:15:20.749 "params": { 00:15:20.749 "timeout_sec": 30 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "bdev_nvme_set_options", 00:15:20.749 "params": { 00:15:20.749 "action_on_timeout": "none", 00:15:20.749 "allow_accel_sequence": false, 00:15:20.749 "arbitration_burst": 0, 00:15:20.749 "bdev_retry_count": 3, 00:15:20.749 "ctrlr_loss_timeout_sec": 0, 00:15:20.749 "delay_cmd_submit": true, 00:15:20.749 "dhchap_dhgroups": [ 00:15:20.749 "null", 00:15:20.749 "ffdhe2048", 00:15:20.749 "ffdhe3072", 00:15:20.749 "ffdhe4096", 00:15:20.749 "ffdhe6144", 00:15:20.749 "ffdhe8192" 00:15:20.749 ], 00:15:20.749 "dhchap_digests": [ 00:15:20.749 "sha256", 00:15:20.749 "sha384", 00:15:20.749 "sha512" 00:15:20.749 ], 00:15:20.749 "disable_auto_failback": false, 00:15:20.749 "fast_io_fail_timeout_sec": 0, 00:15:20.749 "generate_uuids": false, 00:15:20.749 "high_priority_weight": 0, 00:15:20.749 "io_path_stat": false, 00:15:20.749 "io_queue_requests": 0, 
00:15:20.749 "keep_alive_timeout_ms": 10000, 00:15:20.749 "low_priority_weight": 0, 00:15:20.749 "medium_priority_weight": 0, 00:15:20.749 "nvme_adminq_poll_period_us": 10000, 00:15:20.749 "nvme_error_stat": false, 00:15:20.749 "nvme_ioq_poll_period_us": 0, 00:15:20.749 "rdma_cm_event_timeout_ms": 0, 00:15:20.749 "rdma_max_cq_size": 0, 00:15:20.749 "rdma_srq_size": 0, 00:15:20.749 "reconnect_delay_sec": 0, 00:15:20.749 "timeout_admin_us": 0, 00:15:20.749 "timeout_us": 0, 00:15:20.749 "transport_ack_timeout": 0, 00:15:20.749 "transport_retry_count": 4, 00:15:20.749 "transport_tos": 0 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "bdev_nvme_set_hotplug", 00:15:20.749 "params": { 00:15:20.749 "enable": false, 00:15:20.749 "period_us": 100000 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "bdev_malloc_create", 00:15:20.749 "params": { 00:15:20.749 "block_size": 4096, 00:15:20.749 "name": "malloc0", 00:15:20.749 "num_blocks": 8192, 00:15:20.749 "optimal_io_boundary": 0, 00:15:20.749 "physical_block_size": 4096, 00:15:20.749 "uuid": "9d1bad7f-c4a5-462f-9bd1-218b5d3d5562" 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "bdev_wait_for_examine" 00:15:20.749 } 00:15:20.749 ] 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "subsystem": "nbd", 00:15:20.749 "config": [] 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "subsystem": "scheduler", 00:15:20.749 "config": [ 00:15:20.749 { 00:15:20.749 "method": "framework_set_scheduler", 00:15:20.749 "params": { 00:15:20.749 "name": "static" 00:15:20.749 } 00:15:20.749 } 00:15:20.749 ] 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "subsystem": "nvmf", 00:15:20.749 "config": [ 00:15:20.749 { 00:15:20.749 "method": "nvmf_set_config", 00:15:20.749 "params": { 00:15:20.749 "admin_cmd_passthru": { 00:15:20.749 "identify_ctrlr": false 00:15:20.749 }, 00:15:20.749 "discovery_filter": "match_any" 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "nvmf_set_max_subsystems", 00:15:20.749 "params": { 00:15:20.749 "max_subsystems": 1024 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "nvmf_set_crdt", 00:15:20.749 "params": { 00:15:20.749 "crdt1": 0, 00:15:20.749 "crdt2": 0, 00:15:20.749 "crdt3": 0 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "nvmf_create_transport", 00:15:20.749 "params": { 00:15:20.749 "abort_timeout_sec": 1, 00:15:20.749 "ack_timeout": 0, 00:15:20.749 "buf_cache_size": 4294967295, 00:15:20.749 "c2h_success": false, 00:15:20.749 "data_wr_pool_size": 0, 00:15:20.749 "dif_insert_or_strip": false, 00:15:20.749 "in_capsule_data_size": 4096, 00:15:20.749 "io_unit_size": 131072, 00:15:20.749 "max_aq_depth": 128, 00:15:20.749 "max_io_qpairs_per_ctrlr": 127, 00:15:20.749 "max_io_size": 131072, 00:15:20.749 "max_queue_depth": 128, 00:15:20.749 "num_shared_buffers": 511, 00:15:20.749 "sock_priority": 0, 00:15:20.749 "trtype": "TCP", 00:15:20.749 "zcopy": false 00:15:20.749 } 00:15:20.749 }, 00:15:20.749 { 00:15:20.749 "method": "nvmf_create_subsystem", 00:15:20.749 "params": { 00:15:20.749 "allow_any_host": false, 00:15:20.749 "ana_reporting": false, 00:15:20.749 "max_cntlid": 65519, 00:15:20.749 "max_namespaces": 10, 00:15:20.749 "min_cntlid": 1, 00:15:20.749 "model_number": "SPDK bdev Controller", 00:15:20.749 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.749 "serial_number": "SPDK00000000000001" 00:15:20.749 } 00:15:20.749 }, 00:15:20.750 { 00:15:20.750 "method": "nvmf_subsystem_add_host", 00:15:20.750 "params": { 00:15:20.750 "host": 
"nqn.2016-06.io.spdk:host1", 00:15:20.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.750 "psk": "/tmp/tmp.kTbHpq3SyG" 00:15:20.750 } 00:15:20.750 }, 00:15:20.750 { 00:15:20.750 "method": "nvmf_subsystem_add_ns", 00:15:20.750 "params": { 00:15:20.750 "namespace": { 00:15:20.750 "bdev_name": "malloc0", 00:15:20.750 "nguid": "9D1BAD7FC4A5462F9BD1218B5D3D5562", 00:15:20.750 "no_auto_visible": false, 00:15:20.750 "nsid": 1, 00:15:20.750 "uuid": "9d1bad7f-c4a5-462f-9bd1-218b5d3d5562" 00:15:20.750 }, 00:15:20.750 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:20.750 } 00:15:20.750 }, 00:15:20.750 { 00:15:20.750 "method": "nvmf_subsystem_add_listener", 00:15:20.750 "params": { 00:15:20.750 "listen_address": { 00:15:20.750 "adrfam": "IPv4", 00:15:20.750 "traddr": "10.0.0.2", 00:15:20.750 "trsvcid": "4420", 00:15:20.750 "trtype": "TCP" 00:15:20.750 }, 00:15:20.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.750 "secure_channel": true 00:15:20.750 } 00:15:20.750 } 00:15:20.750 ] 00:15:20.750 } 00:15:20.750 ] 00:15:20.750 }' 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84842 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84842 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84842 ']' 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.750 16:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.750 [2024-07-21 16:30:38.838235] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:20.750 [2024-07-21 16:30:38.838324] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.008 [2024-07-21 16:30:38.965952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.008 [2024-07-21 16:30:39.049791] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.008 [2024-07-21 16:30:39.049857] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.008 [2024-07-21 16:30:39.049877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.008 [2024-07-21 16:30:39.049885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:21.008 [2024-07-21 16:30:39.049891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.008 [2024-07-21 16:30:39.049987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.265 [2024-07-21 16:30:39.301672] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.265 [2024-07-21 16:30:39.317603] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:21.265 [2024-07-21 16:30:39.333614] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:21.265 [2024-07-21 16:30:39.333829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84885 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84885 /var/tmp/bdevperf.sock 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84885 ']' 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.830 16:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:21.830 "subsystems": [ 00:15:21.830 { 00:15:21.830 "subsystem": "keyring", 00:15:21.830 "config": [] 00:15:21.830 }, 00:15:21.830 { 00:15:21.830 "subsystem": "iobuf", 00:15:21.830 "config": [ 00:15:21.830 { 00:15:21.830 "method": "iobuf_set_options", 00:15:21.830 "params": { 00:15:21.830 "large_bufsize": 135168, 00:15:21.830 "large_pool_count": 1024, 00:15:21.830 "small_bufsize": 8192, 00:15:21.830 "small_pool_count": 8192 00:15:21.830 } 00:15:21.830 } 00:15:21.830 ] 00:15:21.830 }, 00:15:21.830 { 00:15:21.830 "subsystem": "sock", 00:15:21.830 "config": [ 00:15:21.830 { 00:15:21.830 "method": "sock_set_default_impl", 00:15:21.830 "params": { 00:15:21.830 "impl_name": "posix" 00:15:21.830 } 00:15:21.830 }, 00:15:21.830 { 00:15:21.830 "method": "sock_impl_set_options", 00:15:21.830 "params": { 00:15:21.830 "enable_ktls": false, 00:15:21.830 "enable_placement_id": 0, 00:15:21.830 "enable_quickack": false, 00:15:21.830 "enable_recv_pipe": true, 00:15:21.830 "enable_zerocopy_send_client": false, 00:15:21.830 "enable_zerocopy_send_server": true, 00:15:21.830 "impl_name": "ssl", 00:15:21.830 "recv_buf_size": 4096, 00:15:21.830 "send_buf_size": 4096, 00:15:21.830 "tls_version": 0, 00:15:21.830 "zerocopy_threshold": 0 00:15:21.830 } 00:15:21.830 }, 00:15:21.830 { 00:15:21.830 "method": "sock_impl_set_options", 00:15:21.830 "params": { 00:15:21.830 "enable_ktls": false, 00:15:21.830 "enable_placement_id": 0, 00:15:21.830 "enable_quickack": false, 00:15:21.830 "enable_recv_pipe": true, 
00:15:21.830 "enable_zerocopy_send_client": false, 00:15:21.830 "enable_zerocopy_send_server": true, 00:15:21.830 "impl_name": "posix", 00:15:21.830 "recv_buf_size": 2097152, 00:15:21.830 "send_buf_size": 2097152, 00:15:21.830 "tls_version": 0, 00:15:21.830 "zerocopy_threshold": 0 00:15:21.830 } 00:15:21.830 } 00:15:21.830 ] 00:15:21.830 }, 00:15:21.830 { 00:15:21.830 "subsystem": "vmd", 00:15:21.830 "config": [] 00:15:21.830 }, 00:15:21.830 { 00:15:21.830 "subsystem": "accel", 00:15:21.830 "config": [ 00:15:21.830 { 00:15:21.830 "method": "accel_set_options", 00:15:21.830 "params": { 00:15:21.830 "buf_count": 2048, 00:15:21.830 "large_cache_size": 16, 00:15:21.830 "sequence_count": 2048, 00:15:21.830 "small_cache_size": 128, 00:15:21.830 "task_count": 2048 00:15:21.830 } 00:15:21.830 } 00:15:21.830 ] 00:15:21.830 }, 00:15:21.830 { 00:15:21.830 "subsystem": "bdev", 00:15:21.830 "config": [ 00:15:21.830 { 00:15:21.830 "method": "bdev_set_options", 00:15:21.830 "params": { 00:15:21.830 "bdev_auto_examine": true, 00:15:21.830 "bdev_io_cache_size": 256, 00:15:21.830 "bdev_io_pool_size": 65535, 00:15:21.831 "iobuf_large_cache_size": 16, 00:15:21.831 "iobuf_small_cache_size": 128 00:15:21.831 } 00:15:21.831 }, 00:15:21.831 { 00:15:21.831 "method": "bdev_raid_set_options", 00:15:21.831 "params": { 00:15:21.831 "process_max_bandwidth_mb_sec": 0, 00:15:21.831 "process_window_size_kb": 1024 00:15:21.831 } 00:15:21.831 }, 00:15:21.831 { 00:15:21.831 "method": "bdev_iscsi_set_options", 00:15:21.831 "params": { 00:15:21.831 "timeout_sec": 30 00:15:21.831 } 00:15:21.831 }, 00:15:21.831 { 00:15:21.831 "method": "bdev_nvme_set_options", 00:15:21.831 "params": { 00:15:21.831 "action_on_timeout": "none", 00:15:21.831 "allow_accel_sequence": false, 00:15:21.831 "arbitration_burst": 0, 00:15:21.831 "bdev_retry_count": 3, 00:15:21.831 "ctrlr_loss_timeout_sec": 0, 00:15:21.831 "delay_cmd_submit": true, 00:15:21.831 "dhchap_dhgroups": [ 00:15:21.831 "null", 00:15:21.831 "ffdhe2048", 00:15:21.831 "ffdhe3072", 00:15:21.831 "ffdhe4096", 00:15:21.831 "ffdhe6144", 00:15:21.831 "ffdhe8192" 00:15:21.831 ], 00:15:21.831 "dhchap_digests": [ 00:15:21.831 "sha256", 00:15:21.831 "sha384", 00:15:21.831 "sha512" 00:15:21.831 ], 00:15:21.831 "disable_auto_failback": false, 00:15:21.831 "fast_io_fail_timeout_sec": 0, 00:15:21.831 "generate_uuids": false, 00:15:21.831 "high_priority_weight": 0, 00:15:21.831 "io_path_stat": false, 00:15:21.831 "io_queue_requests": 512, 00:15:21.831 "keep_alive_timeout_ms": 10000, 00:15:21.831 "low_priority_weight": 0, 00:15:21.831 "medium_priority_weight": 0, 00:15:21.831 "nvme_adminq_poll_period_us": 10000, 00:15:21.831 "nvme_error_stat": false, 00:15:21.831 "nvme_ioq_poll_period_us": 0, 00:15:21.831 "rdma_cm_event_timeout_ms": 0, 00:15:21.831 "rdma_max_cq_size": 0, 00:15:21.831 "rdma_srq_size": 0, 00:15:21.831 "reconnect_delay_sec": 0, 00:15:21.831 "timeout_admin_us": 0, 00:15:21.831 "timeout_us": 0, 00:15:21.831 "transport_ack_timeout": 0, 00:15:21.831 "transport_retry_count": 4, 00:15:21.831 "transport_tos": 0 00:15:21.831 } 00:15:21.831 }, 00:15:21.831 { 00:15:21.831 "method": "bdev_nvme_attach_controller", 00:15:21.831 "params": { 00:15:21.831 "adrfam": "IPv4", 00:15:21.831 "ctrlr_loss_timeout_sec": 0, 00:15:21.831 "ddgst": false, 00:15:21.831 "fast_io_fail_timeout_sec": 0, 00:15:21.831 "hdgst": false, 00:15:21.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:21.831 "name": "TLSTEST", 00:15:21.831 "prchk_guard": false, 00:15:21.831 "prchk_reftag": false, 00:15:21.831 "psk": 
"/tmp/tmp.kTbHpq3SyG", 00:15:21.831 "reconnect_delay_sec": 0, 00:15:21.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:21.831 "traddr": "10.0.0.2", 00:15:21.831 "trsvcid": "4420", 00:15:21.831 "trtype": "TCP" 00:15:21.831 } 00:15:21.831 }, 00:15:21.831 { 00:15:21.831 "method": "bdev_nvme_set_hotplug", 00:15:21.831 "params": { 00:15:21.831 "enable": false, 00:15:21.831 "period_us": 100000 00:15:21.831 } 00:15:21.831 }, 00:15:21.831 { 00:15:21.831 "method": "bdev_wait_for_examine" 00:15:21.831 } 00:15:21.831 ] 00:15:21.831 }, 00:15:21.831 { 00:15:21.831 "subsystem": "nbd", 00:15:21.831 "config": [] 00:15:21.831 } 00:15:21.831 ] 00:15:21.831 }' 00:15:21.831 16:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:21.831 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.831 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.831 16:30:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.831 [2024-07-21 16:30:39.849351] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:21.831 [2024-07-21 16:30:39.849460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84885 ] 00:15:21.831 [2024-07-21 16:30:39.979118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.088 [2024-07-21 16:30:40.065742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.088 [2024-07-21 16:30:40.252374] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:22.088 [2024-07-21 16:30:40.252528] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:22.655 16:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.655 16:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:22.655 16:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:22.912 Running I/O for 10 seconds... 
00:15:32.884 00:15:32.884 Latency(us) 00:15:32.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.884 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:32.884 Verification LBA range: start 0x0 length 0x2000 00:15:32.884 TLSTESTn1 : 10.02 4866.90 19.01 0.00 0.00 26252.22 10247.45 20614.05 00:15:32.884 =================================================================================================================== 00:15:32.884 Total : 4866.90 19.01 0.00 0.00 26252.22 10247.45 20614.05 00:15:32.884 0 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84885 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84885 ']' 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84885 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84885 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:32.884 killing process with pid 84885 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84885' 00:15:32.884 Received shutdown signal, test time was about 10.000000 seconds 00:15:32.884 00:15:32.884 Latency(us) 00:15:32.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.884 =================================================================================================================== 00:15:32.884 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84885 00:15:32.884 [2024-07-21 16:30:50.993356] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:32.884 16:30:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84885 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84842 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84842 ']' 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84842 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84842 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:33.142 killing process with pid 84842 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84842' 00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84842 00:15:33.142 [2024-07-21 16:30:51.295494] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 
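That closes out the config round-trip phase: the JSON dumps above were replayed back into a fresh target and a fresh bdevperf through -c /dev/fd/62 and -c /dev/fd/63, the 10-second verify job ran against TLSTESTn1 over the TLS listener, and pids 84885 and 84842 were torn down. The equivalent explicit RPC-driven client flow, as used for the 84768 instance at target/tls.sh@187, @192 and @211, condenses to the sketch below (a sketch only; paths, NQNs and the PSK file are the values visible in this log, rpc.py abbreviates the full scripts/rpc.py path from the trace, and backgrounding with & stands in for the -z wait-for-RPC startup plus waitforlisten that the script actually uses):

  # start bdevperf idle (-z) on its own RPC socket: 128 queued 4096-byte verify I/Os for 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  # attach to the TLS listener using the (deprecated) file-based PSK
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.kTbHpq3SyG
  # kick off the queued workload; -t 20 here appears to be an RPC-side timeout, while the
  # 10-second test length comes from bdevperf's own -t 10 above
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests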
00:15:33.142 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84842 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85037 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85037 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85037 ']' 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.411 16:30:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.702 [2024-07-21 16:30:51.647010] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:33.702 [2024-07-21 16:30:51.647124] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.702 [2024-07-21 16:30:51.791143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.702 [2024-07-21 16:30:51.906921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.702 [2024-07-21 16:30:51.906997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.702 [2024-07-21 16:30:51.907013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.702 [2024-07-21 16:30:51.907024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.702 [2024-07-21 16:30:51.907033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:33.702 [2024-07-21 16:30:51.907062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.kTbHpq3SyG 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kTbHpq3SyG 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:34.637 [2024-07-21 16:30:52.811537] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.637 16:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:34.894 16:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:35.152 [2024-07-21 16:30:53.215570] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:35.152 [2024-07-21 16:30:53.215862] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.152 16:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:35.410 malloc0 00:15:35.410 16:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:35.668 16:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kTbHpq3SyG 00:15:35.927 [2024-07-21 16:30:53.909481] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:35.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
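The setup_nvmf_tgt call traced above is the whole target-side TLS bring-up for this phase. With timestamps and xtrace prefixes stripped it reduces to six rpc.py calls; this is a condensed sketch of the sequence shown in the trace (subsystem, listener address and PSK file are the values used in this run, and the --psk path form is the deprecated mechanism the tcp.c warnings refer to):

  # TCP transport plus a subsystem allowing up to 10 namespaces
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS-enabled ("TLS support is considered experimental")
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  # back namespace 1 with a 32 MB / 4096-byte-block malloc bdev
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # allow host1 and hand it the PSK via the deprecated file-path form
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kTbHpq3SyG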
00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85134 00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85134 /var/tmp/bdevperf.sock 00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85134 ']' 00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.927 16:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.927 [2024-07-21 16:30:53.979400] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:35.927 [2024-07-21 16:30:53.979509] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85134 ] 00:15:35.927 [2024-07-21 16:30:54.113141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.185 [2024-07-21 16:30:54.208419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.750 16:30:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.750 16:30:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:36.750 16:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kTbHpq3SyG 00:15:37.008 16:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:37.266 [2024-07-21 16:30:55.357704] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:37.266 nvme0n1 00:15:37.266 16:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:37.523 Running I/O for 1 seconds... 
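Unlike the earlier file-path attach (-b TLSTEST ... --psk /tmp/tmp.kTbHpq3SyG), the steps at target/tls.sh@227 and @228 go through the keyring: the PSK file is first registered as a named key over the bdevperf RPC socket and the controller then references it by name. Condensed from the trace above (same socket, key file and NQNs as in this run; rpc.py again abbreviates the full scripts/rpc.py path):

  # register the PSK file with the keyring under the name key0
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kTbHpq3SyG
  # attach to the TLS listener, referencing the key by name instead of by path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # run the queued 1-second verify workload
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests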
00:15:38.456 00:15:38.456 Latency(us) 00:15:38.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.456 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:38.456 Verification LBA range: start 0x0 length 0x2000 00:15:38.456 nvme0n1 : 1.02 3925.46 15.33 0.00 0.00 32150.72 5332.25 20256.58 00:15:38.456 =================================================================================================================== 00:15:38.456 Total : 3925.46 15.33 0.00 0.00 32150.72 5332.25 20256.58 00:15:38.456 0 00:15:38.456 16:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85134 00:15:38.456 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85134 ']' 00:15:38.456 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85134 00:15:38.456 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:38.456 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.456 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85134 00:15:38.456 killing process with pid 85134 00:15:38.456 Received shutdown signal, test time was about 1.000000 seconds 00:15:38.456 00:15:38.456 Latency(us) 00:15:38.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.456 =================================================================================================================== 00:15:38.456 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:38.456 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:38.456 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:38.456 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85134' 00:15:38.457 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85134 00:15:38.457 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85134 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 85037 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85037 ']' 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85037 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85037 00:15:38.714 killing process with pid 85037 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85037' 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85037 00:15:38.714 [2024-07-21 16:30:56.903215] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:38.714 16:30:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85037 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85204 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85204 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85204 ']' 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.279 16:30:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.279 [2024-07-21 16:30:57.264656] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:39.279 [2024-07-21 16:30:57.264772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.279 [2024-07-21 16:30:57.402991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.536 [2024-07-21 16:30:57.488855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.536 [2024-07-21 16:30:57.488927] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.536 [2024-07-21 16:30:57.488938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.536 [2024-07-21 16:30:57.488953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.536 [2024-07-21 16:30:57.488960] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:39.536 [2024-07-21 16:30:57.488993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.102 [2024-07-21 16:30:58.207691] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.102 malloc0 00:15:40.102 [2024-07-21 16:30:58.241422] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:40.102 [2024-07-21 16:30:58.241654] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=85254 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 85254 /var/tmp/bdevperf.sock 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85254 ']' 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.102 16:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.359 [2024-07-21 16:30:58.318641] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:15:40.359 [2024-07-21 16:30:58.318743] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85254 ] 00:15:40.360 [2024-07-21 16:30:58.454414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.360 [2024-07-21 16:30:58.565002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.291 16:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.291 16:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:41.291 16:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kTbHpq3SyG 00:15:41.291 16:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:41.549 [2024-07-21 16:30:59.695432] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:41.807 nvme0n1 00:15:41.807 16:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:41.807 Running I/O for 1 seconds... 00:15:42.741 00:15:42.741 Latency(us) 00:15:42.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.741 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:42.742 Verification LBA range: start 0x0 length 0x2000 00:15:42.742 nvme0n1 : 1.03 3943.30 15.40 0.00 0.00 32007.73 8698.41 24188.74 00:15:42.742 =================================================================================================================== 00:15:42.742 Total : 3943.30 15.40 0.00 0.00 32007.73 8698.41 24188.74 00:15:42.742 0 00:15:42.742 16:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:15:42.742 16:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.742 16:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.000 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.000 16:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:15:43.000 "subsystems": [ 00:15:43.000 { 00:15:43.000 "subsystem": "keyring", 00:15:43.000 "config": [ 00:15:43.000 { 00:15:43.000 "method": "keyring_file_add_key", 00:15:43.000 "params": { 00:15:43.000 "name": "key0", 00:15:43.000 "path": "/tmp/tmp.kTbHpq3SyG" 00:15:43.000 } 00:15:43.000 } 00:15:43.000 ] 00:15:43.000 }, 00:15:43.000 { 00:15:43.000 "subsystem": "iobuf", 00:15:43.000 "config": [ 00:15:43.000 { 00:15:43.000 "method": "iobuf_set_options", 00:15:43.000 "params": { 00:15:43.000 "large_bufsize": 135168, 00:15:43.000 "large_pool_count": 1024, 00:15:43.000 "small_bufsize": 8192, 00:15:43.000 "small_pool_count": 8192 00:15:43.000 } 00:15:43.000 } 00:15:43.000 ] 00:15:43.000 }, 00:15:43.000 { 00:15:43.000 "subsystem": "sock", 00:15:43.000 "config": [ 00:15:43.000 { 00:15:43.000 "method": "sock_set_default_impl", 00:15:43.000 "params": { 00:15:43.000 "impl_name": "posix" 00:15:43.000 } 00:15:43.000 }, 00:15:43.000 { 00:15:43.000 "method": "sock_impl_set_options", 00:15:43.000 "params": { 00:15:43.000 
"enable_ktls": false, 00:15:43.000 "enable_placement_id": 0, 00:15:43.000 "enable_quickack": false, 00:15:43.000 "enable_recv_pipe": true, 00:15:43.000 "enable_zerocopy_send_client": false, 00:15:43.000 "enable_zerocopy_send_server": true, 00:15:43.000 "impl_name": "ssl", 00:15:43.000 "recv_buf_size": 4096, 00:15:43.000 "send_buf_size": 4096, 00:15:43.000 "tls_version": 0, 00:15:43.000 "zerocopy_threshold": 0 00:15:43.000 } 00:15:43.000 }, 00:15:43.000 { 00:15:43.000 "method": "sock_impl_set_options", 00:15:43.000 "params": { 00:15:43.000 "enable_ktls": false, 00:15:43.000 "enable_placement_id": 0, 00:15:43.000 "enable_quickack": false, 00:15:43.000 "enable_recv_pipe": true, 00:15:43.000 "enable_zerocopy_send_client": false, 00:15:43.000 "enable_zerocopy_send_server": true, 00:15:43.000 "impl_name": "posix", 00:15:43.000 "recv_buf_size": 2097152, 00:15:43.000 "send_buf_size": 2097152, 00:15:43.000 "tls_version": 0, 00:15:43.000 "zerocopy_threshold": 0 00:15:43.000 } 00:15:43.000 } 00:15:43.000 ] 00:15:43.000 }, 00:15:43.000 { 00:15:43.000 "subsystem": "vmd", 00:15:43.000 "config": [] 00:15:43.000 }, 00:15:43.000 { 00:15:43.000 "subsystem": "accel", 00:15:43.000 "config": [ 00:15:43.000 { 00:15:43.000 "method": "accel_set_options", 00:15:43.000 "params": { 00:15:43.000 "buf_count": 2048, 00:15:43.000 "large_cache_size": 16, 00:15:43.000 "sequence_count": 2048, 00:15:43.000 "small_cache_size": 128, 00:15:43.000 "task_count": 2048 00:15:43.000 } 00:15:43.000 } 00:15:43.001 ] 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "subsystem": "bdev", 00:15:43.001 "config": [ 00:15:43.001 { 00:15:43.001 "method": "bdev_set_options", 00:15:43.001 "params": { 00:15:43.001 "bdev_auto_examine": true, 00:15:43.001 "bdev_io_cache_size": 256, 00:15:43.001 "bdev_io_pool_size": 65535, 00:15:43.001 "iobuf_large_cache_size": 16, 00:15:43.001 "iobuf_small_cache_size": 128 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "bdev_raid_set_options", 00:15:43.001 "params": { 00:15:43.001 "process_max_bandwidth_mb_sec": 0, 00:15:43.001 "process_window_size_kb": 1024 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "bdev_iscsi_set_options", 00:15:43.001 "params": { 00:15:43.001 "timeout_sec": 30 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "bdev_nvme_set_options", 00:15:43.001 "params": { 00:15:43.001 "action_on_timeout": "none", 00:15:43.001 "allow_accel_sequence": false, 00:15:43.001 "arbitration_burst": 0, 00:15:43.001 "bdev_retry_count": 3, 00:15:43.001 "ctrlr_loss_timeout_sec": 0, 00:15:43.001 "delay_cmd_submit": true, 00:15:43.001 "dhchap_dhgroups": [ 00:15:43.001 "null", 00:15:43.001 "ffdhe2048", 00:15:43.001 "ffdhe3072", 00:15:43.001 "ffdhe4096", 00:15:43.001 "ffdhe6144", 00:15:43.001 "ffdhe8192" 00:15:43.001 ], 00:15:43.001 "dhchap_digests": [ 00:15:43.001 "sha256", 00:15:43.001 "sha384", 00:15:43.001 "sha512" 00:15:43.001 ], 00:15:43.001 "disable_auto_failback": false, 00:15:43.001 "fast_io_fail_timeout_sec": 0, 00:15:43.001 "generate_uuids": false, 00:15:43.001 "high_priority_weight": 0, 00:15:43.001 "io_path_stat": false, 00:15:43.001 "io_queue_requests": 0, 00:15:43.001 "keep_alive_timeout_ms": 10000, 00:15:43.001 "low_priority_weight": 0, 00:15:43.001 "medium_priority_weight": 0, 00:15:43.001 "nvme_adminq_poll_period_us": 10000, 00:15:43.001 "nvme_error_stat": false, 00:15:43.001 "nvme_ioq_poll_period_us": 0, 00:15:43.001 "rdma_cm_event_timeout_ms": 0, 00:15:43.001 "rdma_max_cq_size": 0, 00:15:43.001 "rdma_srq_size": 0, 00:15:43.001 
"reconnect_delay_sec": 0, 00:15:43.001 "timeout_admin_us": 0, 00:15:43.001 "timeout_us": 0, 00:15:43.001 "transport_ack_timeout": 0, 00:15:43.001 "transport_retry_count": 4, 00:15:43.001 "transport_tos": 0 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "bdev_nvme_set_hotplug", 00:15:43.001 "params": { 00:15:43.001 "enable": false, 00:15:43.001 "period_us": 100000 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "bdev_malloc_create", 00:15:43.001 "params": { 00:15:43.001 "block_size": 4096, 00:15:43.001 "name": "malloc0", 00:15:43.001 "num_blocks": 8192, 00:15:43.001 "optimal_io_boundary": 0, 00:15:43.001 "physical_block_size": 4096, 00:15:43.001 "uuid": "896af818-cde6-4f25-a778-e933a34be7b4" 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "bdev_wait_for_examine" 00:15:43.001 } 00:15:43.001 ] 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "subsystem": "nbd", 00:15:43.001 "config": [] 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "subsystem": "scheduler", 00:15:43.001 "config": [ 00:15:43.001 { 00:15:43.001 "method": "framework_set_scheduler", 00:15:43.001 "params": { 00:15:43.001 "name": "static" 00:15:43.001 } 00:15:43.001 } 00:15:43.001 ] 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "subsystem": "nvmf", 00:15:43.001 "config": [ 00:15:43.001 { 00:15:43.001 "method": "nvmf_set_config", 00:15:43.001 "params": { 00:15:43.001 "admin_cmd_passthru": { 00:15:43.001 "identify_ctrlr": false 00:15:43.001 }, 00:15:43.001 "discovery_filter": "match_any" 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "nvmf_set_max_subsystems", 00:15:43.001 "params": { 00:15:43.001 "max_subsystems": 1024 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "nvmf_set_crdt", 00:15:43.001 "params": { 00:15:43.001 "crdt1": 0, 00:15:43.001 "crdt2": 0, 00:15:43.001 "crdt3": 0 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "nvmf_create_transport", 00:15:43.001 "params": { 00:15:43.001 "abort_timeout_sec": 1, 00:15:43.001 "ack_timeout": 0, 00:15:43.001 "buf_cache_size": 4294967295, 00:15:43.001 "c2h_success": false, 00:15:43.001 "data_wr_pool_size": 0, 00:15:43.001 "dif_insert_or_strip": false, 00:15:43.001 "in_capsule_data_size": 4096, 00:15:43.001 "io_unit_size": 131072, 00:15:43.001 "max_aq_depth": 128, 00:15:43.001 "max_io_qpairs_per_ctrlr": 127, 00:15:43.001 "max_io_size": 131072, 00:15:43.001 "max_queue_depth": 128, 00:15:43.001 "num_shared_buffers": 511, 00:15:43.001 "sock_priority": 0, 00:15:43.001 "trtype": "TCP", 00:15:43.001 "zcopy": false 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "nvmf_create_subsystem", 00:15:43.001 "params": { 00:15:43.001 "allow_any_host": false, 00:15:43.001 "ana_reporting": false, 00:15:43.001 "max_cntlid": 65519, 00:15:43.001 "max_namespaces": 32, 00:15:43.001 "min_cntlid": 1, 00:15:43.001 "model_number": "SPDK bdev Controller", 00:15:43.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.001 "serial_number": "00000000000000000000" 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "nvmf_subsystem_add_host", 00:15:43.001 "params": { 00:15:43.001 "host": "nqn.2016-06.io.spdk:host1", 00:15:43.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.001 "psk": "key0" 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "nvmf_subsystem_add_ns", 00:15:43.001 "params": { 00:15:43.001 "namespace": { 00:15:43.001 "bdev_name": "malloc0", 00:15:43.001 "nguid": "896AF818CDE64F25A778E933A34BE7B4", 00:15:43.001 "no_auto_visible": false, 
00:15:43.001 "nsid": 1, 00:15:43.001 "uuid": "896af818-cde6-4f25-a778-e933a34be7b4" 00:15:43.001 }, 00:15:43.001 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:43.001 } 00:15:43.001 }, 00:15:43.001 { 00:15:43.001 "method": "nvmf_subsystem_add_listener", 00:15:43.001 "params": { 00:15:43.001 "listen_address": { 00:15:43.001 "adrfam": "IPv4", 00:15:43.001 "traddr": "10.0.0.2", 00:15:43.001 "trsvcid": "4420", 00:15:43.001 "trtype": "TCP" 00:15:43.001 }, 00:15:43.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.001 "secure_channel": false, 00:15:43.001 "sock_impl": "ssl" 00:15:43.001 } 00:15:43.001 } 00:15:43.001 ] 00:15:43.001 } 00:15:43.001 ] 00:15:43.001 }' 00:15:43.001 16:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:43.260 16:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:15:43.260 "subsystems": [ 00:15:43.260 { 00:15:43.260 "subsystem": "keyring", 00:15:43.260 "config": [ 00:15:43.260 { 00:15:43.260 "method": "keyring_file_add_key", 00:15:43.260 "params": { 00:15:43.260 "name": "key0", 00:15:43.260 "path": "/tmp/tmp.kTbHpq3SyG" 00:15:43.260 } 00:15:43.260 } 00:15:43.260 ] 00:15:43.260 }, 00:15:43.260 { 00:15:43.260 "subsystem": "iobuf", 00:15:43.260 "config": [ 00:15:43.260 { 00:15:43.260 "method": "iobuf_set_options", 00:15:43.260 "params": { 00:15:43.260 "large_bufsize": 135168, 00:15:43.260 "large_pool_count": 1024, 00:15:43.260 "small_bufsize": 8192, 00:15:43.260 "small_pool_count": 8192 00:15:43.260 } 00:15:43.260 } 00:15:43.260 ] 00:15:43.260 }, 00:15:43.260 { 00:15:43.260 "subsystem": "sock", 00:15:43.260 "config": [ 00:15:43.260 { 00:15:43.260 "method": "sock_set_default_impl", 00:15:43.260 "params": { 00:15:43.260 "impl_name": "posix" 00:15:43.260 } 00:15:43.260 }, 00:15:43.260 { 00:15:43.260 "method": "sock_impl_set_options", 00:15:43.260 "params": { 00:15:43.260 "enable_ktls": false, 00:15:43.260 "enable_placement_id": 0, 00:15:43.260 "enable_quickack": false, 00:15:43.260 "enable_recv_pipe": true, 00:15:43.260 "enable_zerocopy_send_client": false, 00:15:43.260 "enable_zerocopy_send_server": true, 00:15:43.260 "impl_name": "ssl", 00:15:43.260 "recv_buf_size": 4096, 00:15:43.260 "send_buf_size": 4096, 00:15:43.260 "tls_version": 0, 00:15:43.260 "zerocopy_threshold": 0 00:15:43.260 } 00:15:43.260 }, 00:15:43.260 { 00:15:43.260 "method": "sock_impl_set_options", 00:15:43.260 "params": { 00:15:43.260 "enable_ktls": false, 00:15:43.260 "enable_placement_id": 0, 00:15:43.260 "enable_quickack": false, 00:15:43.260 "enable_recv_pipe": true, 00:15:43.260 "enable_zerocopy_send_client": false, 00:15:43.260 "enable_zerocopy_send_server": true, 00:15:43.260 "impl_name": "posix", 00:15:43.260 "recv_buf_size": 2097152, 00:15:43.260 "send_buf_size": 2097152, 00:15:43.260 "tls_version": 0, 00:15:43.260 "zerocopy_threshold": 0 00:15:43.260 } 00:15:43.260 } 00:15:43.260 ] 00:15:43.260 }, 00:15:43.260 { 00:15:43.260 "subsystem": "vmd", 00:15:43.260 "config": [] 00:15:43.260 }, 00:15:43.260 { 00:15:43.260 "subsystem": "accel", 00:15:43.260 "config": [ 00:15:43.260 { 00:15:43.260 "method": "accel_set_options", 00:15:43.260 "params": { 00:15:43.260 "buf_count": 2048, 00:15:43.260 "large_cache_size": 16, 00:15:43.260 "sequence_count": 2048, 00:15:43.260 "small_cache_size": 128, 00:15:43.260 "task_count": 2048 00:15:43.260 } 00:15:43.260 } 00:15:43.260 ] 00:15:43.260 }, 00:15:43.260 { 00:15:43.260 "subsystem": "bdev", 00:15:43.261 "config": [ 00:15:43.261 { 00:15:43.261 "method": 
"bdev_set_options", 00:15:43.261 "params": { 00:15:43.261 "bdev_auto_examine": true, 00:15:43.261 "bdev_io_cache_size": 256, 00:15:43.261 "bdev_io_pool_size": 65535, 00:15:43.261 "iobuf_large_cache_size": 16, 00:15:43.261 "iobuf_small_cache_size": 128 00:15:43.261 } 00:15:43.261 }, 00:15:43.261 { 00:15:43.261 "method": "bdev_raid_set_options", 00:15:43.261 "params": { 00:15:43.261 "process_max_bandwidth_mb_sec": 0, 00:15:43.261 "process_window_size_kb": 1024 00:15:43.261 } 00:15:43.261 }, 00:15:43.261 { 00:15:43.261 "method": "bdev_iscsi_set_options", 00:15:43.261 "params": { 00:15:43.261 "timeout_sec": 30 00:15:43.261 } 00:15:43.261 }, 00:15:43.261 { 00:15:43.261 "method": "bdev_nvme_set_options", 00:15:43.261 "params": { 00:15:43.261 "action_on_timeout": "none", 00:15:43.261 "allow_accel_sequence": false, 00:15:43.261 "arbitration_burst": 0, 00:15:43.261 "bdev_retry_count": 3, 00:15:43.261 "ctrlr_loss_timeout_sec": 0, 00:15:43.261 "delay_cmd_submit": true, 00:15:43.261 "dhchap_dhgroups": [ 00:15:43.261 "null", 00:15:43.261 "ffdhe2048", 00:15:43.261 "ffdhe3072", 00:15:43.261 "ffdhe4096", 00:15:43.261 "ffdhe6144", 00:15:43.261 "ffdhe8192" 00:15:43.261 ], 00:15:43.261 "dhchap_digests": [ 00:15:43.261 "sha256", 00:15:43.261 "sha384", 00:15:43.261 "sha512" 00:15:43.261 ], 00:15:43.261 "disable_auto_failback": false, 00:15:43.261 "fast_io_fail_timeout_sec": 0, 00:15:43.261 "generate_uuids": false, 00:15:43.261 "high_priority_weight": 0, 00:15:43.261 "io_path_stat": false, 00:15:43.261 "io_queue_requests": 512, 00:15:43.261 "keep_alive_timeout_ms": 10000, 00:15:43.261 "low_priority_weight": 0, 00:15:43.261 "medium_priority_weight": 0, 00:15:43.261 "nvme_adminq_poll_period_us": 10000, 00:15:43.261 "nvme_error_stat": false, 00:15:43.261 "nvme_ioq_poll_period_us": 0, 00:15:43.261 "rdma_cm_event_timeout_ms": 0, 00:15:43.261 "rdma_max_cq_size": 0, 00:15:43.261 "rdma_srq_size": 0, 00:15:43.261 "reconnect_delay_sec": 0, 00:15:43.261 "timeout_admin_us": 0, 00:15:43.261 "timeout_us": 0, 00:15:43.261 "transport_ack_timeout": 0, 00:15:43.261 "transport_retry_count": 4, 00:15:43.261 "transport_tos": 0 00:15:43.261 } 00:15:43.261 }, 00:15:43.261 { 00:15:43.261 "method": "bdev_nvme_attach_controller", 00:15:43.261 "params": { 00:15:43.261 "adrfam": "IPv4", 00:15:43.261 "ctrlr_loss_timeout_sec": 0, 00:15:43.261 "ddgst": false, 00:15:43.261 "fast_io_fail_timeout_sec": 0, 00:15:43.261 "hdgst": false, 00:15:43.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:43.261 "name": "nvme0", 00:15:43.261 "prchk_guard": false, 00:15:43.261 "prchk_reftag": false, 00:15:43.261 "psk": "key0", 00:15:43.261 "reconnect_delay_sec": 0, 00:15:43.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:43.261 "traddr": "10.0.0.2", 00:15:43.261 "trsvcid": "4420", 00:15:43.261 "trtype": "TCP" 00:15:43.261 } 00:15:43.261 }, 00:15:43.261 { 00:15:43.261 "method": "bdev_nvme_set_hotplug", 00:15:43.261 "params": { 00:15:43.261 "enable": false, 00:15:43.261 "period_us": 100000 00:15:43.261 } 00:15:43.261 }, 00:15:43.261 { 00:15:43.261 "method": "bdev_enable_histogram", 00:15:43.261 "params": { 00:15:43.261 "enable": true, 00:15:43.261 "name": "nvme0n1" 00:15:43.261 } 00:15:43.261 }, 00:15:43.261 { 00:15:43.261 "method": "bdev_wait_for_examine" 00:15:43.261 } 00:15:43.261 ] 00:15:43.261 }, 00:15:43.261 { 00:15:43.261 "subsystem": "nbd", 00:15:43.261 "config": [] 00:15:43.261 } 00:15:43.261 ] 00:15:43.261 }' 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 85254 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 85254 ']' 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85254 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85254 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:43.261 killing process with pid 85254 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85254' 00:15:43.261 Received shutdown signal, test time was about 1.000000 seconds 00:15:43.261 00:15:43.261 Latency(us) 00:15:43.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.261 =================================================================================================================== 00:15:43.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85254 00:15:43.261 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85254 00:15:43.520 16:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 85204 00:15:43.520 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85204 ']' 00:15:43.520 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85204 00:15:43.520 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:43.520 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.520 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85204 00:15:43.779 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:43.779 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:43.779 killing process with pid 85204 00:15:43.779 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85204' 00:15:43.779 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85204 00:15:43.779 16:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85204 00:15:44.038 16:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:15:44.038 16:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:44.038 16:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:15:44.038 "subsystems": [ 00:15:44.038 { 00:15:44.038 "subsystem": "keyring", 00:15:44.038 "config": [ 00:15:44.038 { 00:15:44.038 "method": "keyring_file_add_key", 00:15:44.038 "params": { 00:15:44.038 "name": "key0", 00:15:44.038 "path": "/tmp/tmp.kTbHpq3SyG" 00:15:44.038 } 00:15:44.038 } 00:15:44.038 ] 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "subsystem": "iobuf", 00:15:44.038 "config": [ 00:15:44.038 { 00:15:44.038 "method": "iobuf_set_options", 00:15:44.038 "params": { 00:15:44.038 "large_bufsize": 135168, 00:15:44.038 "large_pool_count": 1024, 00:15:44.038 "small_bufsize": 8192, 00:15:44.038 "small_pool_count": 8192 00:15:44.038 } 00:15:44.038 } 00:15:44.038 ] 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "subsystem": "sock", 00:15:44.038 "config": [ 00:15:44.038 { 00:15:44.038 "method": 
"sock_set_default_impl", 00:15:44.038 "params": { 00:15:44.038 "impl_name": "posix" 00:15:44.038 } 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "method": "sock_impl_set_options", 00:15:44.038 "params": { 00:15:44.038 "enable_ktls": false, 00:15:44.038 "enable_placement_id": 0, 00:15:44.038 "enable_quickack": false, 00:15:44.038 "enable_recv_pipe": true, 00:15:44.038 "enable_zerocopy_send_client": false, 00:15:44.038 "enable_zerocopy_send_server": true, 00:15:44.038 "impl_name": "ssl", 00:15:44.038 "recv_buf_size": 4096, 00:15:44.038 "send_buf_size": 4096, 00:15:44.038 "tls_version": 0, 00:15:44.038 "zerocopy_threshold": 0 00:15:44.038 } 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "method": "sock_impl_set_options", 00:15:44.038 "params": { 00:15:44.038 "enable_ktls": false, 00:15:44.038 "enable_placement_id": 0, 00:15:44.038 "enable_quickack": false, 00:15:44.038 "enable_recv_pipe": true, 00:15:44.038 "enable_zerocopy_send_client": false, 00:15:44.038 "enable_zerocopy_send_server": true, 00:15:44.038 "impl_name": "posix", 00:15:44.038 "recv_buf_size": 2097152, 00:15:44.038 "send_buf_size": 2097152, 00:15:44.038 "tls_version": 0, 00:15:44.038 "zerocopy_threshold": 0 00:15:44.038 } 00:15:44.038 } 00:15:44.038 ] 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "subsystem": "vmd", 00:15:44.038 "config": [] 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "subsystem": "accel", 00:15:44.038 "config": [ 00:15:44.038 { 00:15:44.038 "method": "accel_set_options", 00:15:44.038 "params": { 00:15:44.038 "buf_count": 2048, 00:15:44.038 "large_cache_size": 16, 00:15:44.038 "sequence_count": 2048, 00:15:44.038 "small_cache_size": 128, 00:15:44.038 "task_count": 2048 00:15:44.038 } 00:15:44.038 } 00:15:44.038 ] 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "subsystem": "bdev", 00:15:44.038 "config": [ 00:15:44.038 { 00:15:44.038 "method": "bdev_set_options", 00:15:44.038 "params": { 00:15:44.038 "bdev_auto_examine": true, 00:15:44.038 "bdev_io_cache_size": 256, 00:15:44.038 "bdev_io_pool_size": 65535, 00:15:44.038 "iobuf_large_cache_size": 16, 00:15:44.038 "iobuf_small_cache_size": 128 00:15:44.038 } 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "method": "bdev_raid_set_options", 00:15:44.038 "params": { 00:15:44.038 "process_max_bandwidth_mb_sec": 0, 00:15:44.038 "process_window_size_kb": 1024 00:15:44.038 } 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "method": "bdev_iscsi_set_options", 00:15:44.038 "params": { 00:15:44.038 "timeout_sec": 30 00:15:44.038 } 00:15:44.038 }, 00:15:44.038 { 00:15:44.038 "method": "bdev_nvme_set_options", 00:15:44.038 "params": { 00:15:44.038 "action_on_timeout": "none", 00:15:44.038 "allow_accel_sequence": false, 00:15:44.038 "arbitration_burst": 0, 00:15:44.038 "bdev_retry_count": 3, 00:15:44.038 "ctrlr_loss_timeout_sec": 0, 00:15:44.038 "delay_cmd_submit": true, 00:15:44.038 "dhchap_dhgroups": [ 00:15:44.038 "null", 00:15:44.038 "ffdhe2048", 00:15:44.038 "ffdhe3072", 00:15:44.038 "ffdhe4096", 00:15:44.038 "ffdhe6144", 00:15:44.038 "ffdhe8192" 00:15:44.038 ], 00:15:44.038 "dhchap_digests": [ 00:15:44.038 "sha256", 00:15:44.038 "sha384", 00:15:44.038 "sha512" 00:15:44.038 ], 00:15:44.038 "disable_auto_failback": false, 00:15:44.038 "fast_io_fail_timeout_sec": 0, 00:15:44.038 "generate_uuids": false, 00:15:44.038 "high_priority_weight": 0, 00:15:44.038 "io_path_stat": false, 00:15:44.038 "io_queue_requests": 0, 00:15:44.038 "keep_alive_timeout_ms": 10000, 00:15:44.038 "low_priority_weight": 0, 00:15:44.038 "medium_priority_weight": 0, 00:15:44.038 "nvme_adminq_poll_period_us": 10000, 
00:15:44.038 "nvme_error_stat": false, 00:15:44.038 "nvme_ioq_poll_period_us": 0, 00:15:44.038 "rdma_cm_event_timeout_ms": 0, 00:15:44.038 "rdma_max_cq_size": 0, 00:15:44.038 "rdma_srq_size": 0, 00:15:44.038 "reconnect_delay_sec": 0, 00:15:44.038 "timeout_admin_us": 0, 00:15:44.038 "timeout_us": 0, 00:15:44.038 "transport_ack_timeout": 0, 00:15:44.038 "transport_retry_count": 4, 00:15:44.039 "transport_tos": 0 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": "bdev_nvme_set_hotplug", 00:15:44.039 "params": { 00:15:44.039 "enable": false, 00:15:44.039 "period_us": 100000 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": "bdev_malloc_create", 00:15:44.039 "params": { 00:15:44.039 "block_size": 4096, 00:15:44.039 "name": "malloc0", 00:15:44.039 "num_blocks": 8192, 00:15:44.039 "optimal_io_boundary": 0, 00:15:44.039 "physical_block_size": 4096, 00:15:44.039 "uuid": "896af818-cde6-4f25-a778-e933a34be7b4" 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": "bdev_wait_for_examine" 00:15:44.039 } 00:15:44.039 ] 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "subsystem": "nbd", 00:15:44.039 "config": [] 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "subsystem": "scheduler", 00:15:44.039 "config": [ 00:15:44.039 { 00:15:44.039 "method": "framework_set_scheduler", 00:15:44.039 "params": { 00:15:44.039 "name": "static" 00:15:44.039 } 00:15:44.039 } 00:15:44.039 ] 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "subsystem": "nvmf", 00:15:44.039 "config": [ 00:15:44.039 { 00:15:44.039 "method": "nvmf_set_config", 00:15:44.039 "params": { 00:15:44.039 "admin_cmd_passthru": { 00:15:44.039 "identify_ctrlr": false 00:15:44.039 }, 00:15:44.039 "discovery_filter": "match_any" 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": "nvmf_set_max_subsystems", 00:15:44.039 "params": { 00:15:44.039 "max_subsystems": 1024 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": "nvmf_set_crdt", 00:15:44.039 "params": { 00:15:44.039 "crdt1": 0, 00:15:44.039 "crdt2": 0, 00:15:44.039 "crdt3": 0 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": "nvmf_create_transport", 00:15:44.039 "params": { 00:15:44.039 "abort_timeout_sec": 1, 00:15:44.039 "ack_timeout": 0, 00:15:44.039 "buf_cache_size": 4294967295, 00:15:44.039 "c2h_success": false, 00:15:44.039 "data_wr_pool_size": 0, 00:15:44.039 "dif_insert_or_strip": false, 00:15:44.039 "in_capsule_data_size": 4096, 00:15:44.039 "io_unit_size": 131072, 00:15:44.039 "max_aq_depth": 128, 00:15:44.039 "max_io_qpairs_per_ctrlr": 127, 00:15:44.039 "max_io_size": 131072, 00:15:44.039 "max_queue_depth": 128, 00:15:44.039 "num_shared_buffers": 511, 00:15:44.039 "sock_priority": 0, 00:15:44.039 "trtype": "TCP", 00:15:44.039 "zcopy": false 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": "nvmf_create_subsystem", 00:15:44.039 "params": { 00:15:44.039 "allow_any_host": false, 00:15:44.039 "ana_reporting": false, 00:15:44.039 "max_cntlid": 65519, 00:15:44.039 "max_namespaces": 32, 00:15:44.039 "min_cntlid": 1, 00:15:44.039 "model_number": "SPDK bdev Controller", 00:15:44.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.039 "serial_number": "00000000000000000000" 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": "nvmf_subsystem_add_host", 00:15:44.039 "params": { 00:15:44.039 "host": "nqn.2016-06.io.spdk:host1", 00:15:44.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.039 "psk": "key0" 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": 
"nvmf_subsystem_add_ns", 00:15:44.039 "params": { 00:15:44.039 "namespace": { 00:15:44.039 "bdev_name": "malloc0", 00:15:44.039 "nguid": "896AF818CDE64F25A778E933A34BE7B4", 00:15:44.039 "no_auto_visible": false, 00:15:44.039 "nsid": 1, 00:15:44.039 "uuid": "896af818-cde6-4f25-a778-e933a34be7b4" 00:15:44.039 }, 00:15:44.039 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:44.039 } 00:15:44.039 }, 00:15:44.039 { 00:15:44.039 "method": "nvmf_subsystem_add_listener", 00:15:44.039 "params": { 00:15:44.039 "listen_address": { 00:15:44.039 "adrfam": "IPv4", 00:15:44.039 "traddr": "10.0.0.2", 00:15:44.039 "trsvcid": "4420", 00:15:44.039 "trtype": "TCP" 00:15:44.039 }, 00:15:44.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.039 "secure_channel": false, 00:15:44.039 "sock_impl": "ssl" 00:15:44.039 } 00:15:44.039 } 00:15:44.039 ] 00:15:44.039 } 00:15:44.039 ] 00:15:44.039 }' 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85349 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85349 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85349 ']' 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.039 16:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.039 [2024-07-21 16:31:02.088334] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:44.039 [2024-07-21 16:31:02.088439] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.039 [2024-07-21 16:31:02.219445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.299 [2024-07-21 16:31:02.305845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.299 [2024-07-21 16:31:02.305926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.299 [2024-07-21 16:31:02.305938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.299 [2024-07-21 16:31:02.305946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.299 [2024-07-21 16:31:02.305954] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:44.299 [2024-07-21 16:31:02.306043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.558 [2024-07-21 16:31:02.570999] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.558 [2024-07-21 16:31:02.602949] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:44.558 [2024-07-21 16:31:02.603188] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=85389 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 85389 /var/tmp/bdevperf.sock 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85389 ']' 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
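The bdevperf instance launched next receives its keyring and controller through the JSON piped in on /dev/fd/63, but the settings are the same ones the first run (pid 85254) applied with live RPCs. As a sketch, that client-side TLS/PSK sequence against the bdevperf socket is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bsock=/var/tmp/bdevperf.sock
    # register the PSK interchange file under the name the attach call refers to
    "$rpc" -s "$bsock" keyring_file_add_key key0 /tmp/tmp.kTbHpq3SyG
    # attach the NVMe/TCP controller over TLS using that key
    "$rpc" -s "$bsock" bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # drive the verify workload that produces the latency table below
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$bsock" perform_tests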
00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:45.127 16:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:15:45.127 "subsystems": [ 00:15:45.127 { 00:15:45.127 "subsystem": "keyring", 00:15:45.127 "config": [ 00:15:45.127 { 00:15:45.127 "method": "keyring_file_add_key", 00:15:45.127 "params": { 00:15:45.127 "name": "key0", 00:15:45.127 "path": "/tmp/tmp.kTbHpq3SyG" 00:15:45.127 } 00:15:45.127 } 00:15:45.127 ] 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "subsystem": "iobuf", 00:15:45.127 "config": [ 00:15:45.127 { 00:15:45.127 "method": "iobuf_set_options", 00:15:45.127 "params": { 00:15:45.127 "large_bufsize": 135168, 00:15:45.127 "large_pool_count": 1024, 00:15:45.127 "small_bufsize": 8192, 00:15:45.127 "small_pool_count": 8192 00:15:45.127 } 00:15:45.127 } 00:15:45.127 ] 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "subsystem": "sock", 00:15:45.127 "config": [ 00:15:45.127 { 00:15:45.127 "method": "sock_set_default_impl", 00:15:45.127 "params": { 00:15:45.127 "impl_name": "posix" 00:15:45.127 } 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "method": "sock_impl_set_options", 00:15:45.127 "params": { 00:15:45.127 "enable_ktls": false, 00:15:45.127 "enable_placement_id": 0, 00:15:45.127 "enable_quickack": false, 00:15:45.127 "enable_recv_pipe": true, 00:15:45.127 "enable_zerocopy_send_client": false, 00:15:45.127 "enable_zerocopy_send_server": true, 00:15:45.127 "impl_name": "ssl", 00:15:45.127 "recv_buf_size": 4096, 00:15:45.127 "send_buf_size": 4096, 00:15:45.127 "tls_version": 0, 00:15:45.127 "zerocopy_threshold": 0 00:15:45.127 } 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "method": "sock_impl_set_options", 00:15:45.127 "params": { 00:15:45.127 "enable_ktls": false, 00:15:45.127 "enable_placement_id": 0, 00:15:45.127 "enable_quickack": false, 00:15:45.127 "enable_recv_pipe": true, 00:15:45.127 "enable_zerocopy_send_client": false, 00:15:45.127 "enable_zerocopy_send_server": true, 00:15:45.127 "impl_name": "posix", 00:15:45.127 "recv_buf_size": 2097152, 00:15:45.127 "send_buf_size": 2097152, 00:15:45.127 "tls_version": 0, 00:15:45.127 "zerocopy_threshold": 0 00:15:45.127 } 00:15:45.127 } 00:15:45.127 ] 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "subsystem": "vmd", 00:15:45.127 "config": [] 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "subsystem": "accel", 00:15:45.127 "config": [ 00:15:45.127 { 00:15:45.127 "method": "accel_set_options", 00:15:45.127 "params": { 00:15:45.127 "buf_count": 2048, 00:15:45.127 "large_cache_size": 16, 00:15:45.127 "sequence_count": 2048, 00:15:45.127 "small_cache_size": 128, 00:15:45.127 "task_count": 2048 00:15:45.127 } 00:15:45.127 } 00:15:45.127 ] 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "subsystem": "bdev", 00:15:45.127 "config": [ 00:15:45.127 { 00:15:45.127 "method": "bdev_set_options", 00:15:45.127 "params": { 00:15:45.127 "bdev_auto_examine": true, 00:15:45.127 "bdev_io_cache_size": 256, 00:15:45.127 "bdev_io_pool_size": 65535, 00:15:45.127 "iobuf_large_cache_size": 16, 00:15:45.127 "iobuf_small_cache_size": 128 00:15:45.127 } 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "method": "bdev_raid_set_options", 00:15:45.127 "params": { 00:15:45.127 "process_max_bandwidth_mb_sec": 0, 00:15:45.127 "process_window_size_kb": 
1024 00:15:45.127 } 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "method": "bdev_iscsi_set_options", 00:15:45.127 "params": { 00:15:45.127 "timeout_sec": 30 00:15:45.127 } 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "method": "bdev_nvme_set_options", 00:15:45.127 "params": { 00:15:45.127 "action_on_timeout": "none", 00:15:45.127 "allow_accel_sequence": false, 00:15:45.127 "arbitration_burst": 0, 00:15:45.127 "bdev_retry_count": 3, 00:15:45.127 "ctrlr_loss_timeout_sec": 0, 00:15:45.127 "delay_cmd_submit": true, 00:15:45.127 "dhchap_dhgroups": [ 00:15:45.127 "null", 00:15:45.127 "ffdhe2048", 00:15:45.127 "ffdhe3072", 00:15:45.127 "ffdhe4096", 00:15:45.127 "ffdhe6144", 00:15:45.127 "ffdhe8192" 00:15:45.127 ], 00:15:45.127 "dhchap_digests": [ 00:15:45.127 "sha256", 00:15:45.127 "sha384", 00:15:45.127 "sha512" 00:15:45.127 ], 00:15:45.127 "disable_auto_failback": false, 00:15:45.127 "fast_io_fail_timeout_sec": 0, 00:15:45.127 "generate_uuids": false, 00:15:45.127 "high_priority_weight": 0, 00:15:45.127 "io_path_stat": false, 00:15:45.127 "io_queue_requests": 512, 00:15:45.127 "keep_alive_timeout_ms": 10000, 00:15:45.127 "low_priority_weight": 0, 00:15:45.127 "medium_priority_weight": 0, 00:15:45.127 "nvme_adminq_poll_period_us": 10000, 00:15:45.127 "nvme_error_stat": false, 00:15:45.127 "nvme_ioq_poll_period_us": 0, 00:15:45.127 "rdma_cm_event_timeout_ms": 0, 00:15:45.127 "rdma_max_cq_size": 0, 00:15:45.127 "rdma_srq_size": 0, 00:15:45.127 "reconnect_delay_sec": 0, 00:15:45.127 "timeout_admin_us": 0, 00:15:45.127 "timeout_us": 0, 00:15:45.127 "transport_ack_timeout": 0, 00:15:45.127 "transport_retry_count": 4, 00:15:45.127 "transport_tos": 0 00:15:45.127 } 00:15:45.127 }, 00:15:45.127 { 00:15:45.127 "method": "bdev_nvme_attach_controller", 00:15:45.127 "params": { 00:15:45.127 "adrfam": "IPv4", 00:15:45.127 "ctrlr_loss_timeout_sec": 0, 00:15:45.127 "ddgst": false, 00:15:45.127 "fast_io_fail_timeout_sec": 0, 00:15:45.127 "hdgst": false, 00:15:45.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:45.127 "name": "nvme0", 00:15:45.128 "prchk_guard": false, 00:15:45.128 "prchk_reftag": false, 00:15:45.128 "psk": "key0", 00:15:45.128 "reconnect_delay_sec": 0, 00:15:45.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:45.128 "traddr": "10.0.0.2", 00:15:45.128 "trsvcid": "4420", 00:15:45.128 "trtype": "TCP" 00:15:45.128 } 00:15:45.128 }, 00:15:45.128 { 00:15:45.128 "method": "bdev_nvme_set_hotplug", 00:15:45.128 "params": { 00:15:45.128 "enable": false, 00:15:45.128 "period_us": 100000 00:15:45.128 } 00:15:45.128 }, 00:15:45.128 { 00:15:45.128 "method": "bdev_enable_histogram", 00:15:45.128 "params": { 00:15:45.128 "enable": true, 00:15:45.128 "name": "nvme0n1" 00:15:45.128 } 00:15:45.128 }, 00:15:45.128 { 00:15:45.128 "method": "bdev_wait_for_examine" 00:15:45.128 } 00:15:45.128 ] 00:15:45.128 }, 00:15:45.128 { 00:15:45.128 "subsystem": "nbd", 00:15:45.128 "config": [] 00:15:45.128 } 00:15:45.128 ] 00:15:45.128 }' 00:15:45.128 [2024-07-21 16:31:03.128877] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:15:45.128 [2024-07-21 16:31:03.128974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85389 ] 00:15:45.128 [2024-07-21 16:31:03.266685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.386 [2024-07-21 16:31:03.352813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.386 [2024-07-21 16:31:03.549956] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:45.963 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.963 16:31:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:45.963 16:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:45.963 16:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:15:46.227 16:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.227 16:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:46.227 Running I/O for 1 seconds... 00:15:47.162 00:15:47.162 Latency(us) 00:15:47.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.162 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:47.162 Verification LBA range: start 0x0 length 0x2000 00:15:47.162 nvme0n1 : 1.03 3981.77 15.55 0.00 0.00 31805.22 5987.61 19660.80 00:15:47.162 =================================================================================================================== 00:15:47.162 Total : 3981.77 15.55 0.00 0.00 31805.22 5987.61 19660.80 00:15:47.162 0 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:47.421 nvmf_trace.0 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85389 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85389 ']' 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85389 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:47.421 
16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85389 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85389' 00:15:47.421 killing process with pid 85389 00:15:47.421 Received shutdown signal, test time was about 1.000000 seconds 00:15:47.421 00:15:47.421 Latency(us) 00:15:47.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.421 =================================================================================================================== 00:15:47.421 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85389 00:15:47.421 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85389 00:15:47.679 16:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:47.679 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.679 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:47.679 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.679 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:47.679 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.679 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.679 rmmod nvme_tcp 00:15:47.679 rmmod nvme_fabrics 00:15:47.679 rmmod nvme_keyring 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85349 ']' 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85349 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85349 ']' 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85349 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85349 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:47.938 killing process with pid 85349 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85349' 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85349 00:15:47.938 16:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85349 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.eB2xTuI8Jf /tmp/tmp.x4tJTxNo6E /tmp/tmp.kTbHpq3SyG 00:15:48.197 00:15:48.197 real 1m25.444s 00:15:48.197 user 2m12.341s 00:15:48.197 sys 0m29.152s 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:48.197 16:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.197 ************************************ 00:15:48.197 END TEST nvmf_tls 00:15:48.197 ************************************ 00:15:48.197 16:31:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:48.197 16:31:06 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:48.197 16:31:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:48.197 16:31:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.197 16:31:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.197 ************************************ 00:15:48.197 START TEST nvmf_fips 00:15:48.197 ************************************ 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:48.197 * Looking for test storage... 
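The nvmf_fips run that starts here opens by checking the system OpenSSL (3.0.9 on this host) against a 3.0.0 floor; the xtrace of that check_openssl_version walk through cmp_versions follows below. A standalone equivalent of the gate, using sort -V instead of the cmp_versions helper and assuming openssl is on PATH:

    # require OpenSSL >= 3.0.0 before probing for the FIPS provider
    ver=$(openssl version | awk '{print $2}')        # e.g. "3.0.9" on this host
    if [ "$(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1)" = 3.0.0 ]; then
        echo "OpenSSL $ver satisfies the 3.0.0 minimum"
    else
        echo "OpenSSL $ver is too old for the FIPS test" >&2
        exit 1
    fi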
00:15:48.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.197 16:31:06 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:48.198 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:48.456 Error setting digest 00:15:48.456 0072FADEA27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:48.456 0072FADEA27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:48.456 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:48.457 Cannot find device "nvmf_tgt_br" 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.457 Cannot find device "nvmf_tgt_br2" 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:48.457 Cannot find device "nvmf_tgt_br" 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:48.457 Cannot find device "nvmf_tgt_br2" 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:48.457 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:48.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:48.715 00:15:48.715 --- 10.0.0.2 ping statistics --- 00:15:48.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.715 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:48.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:48.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:15:48.715 00:15:48.715 --- 10.0.0.3 ping statistics --- 00:15:48.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.715 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:48.715 00:15:48.715 --- 10.0.0.1 ping statistics --- 00:15:48.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.715 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:48.715 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.716 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:48.716 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:48.716 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.716 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:48.716 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:48.973 16:31:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:48.973 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:48.973 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:48.973 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:48.973 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85681 00:15:48.973 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85681 00:15:48.973 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85681 ']' 00:15:48.973 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.973 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.974 16:31:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:48.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.974 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.974 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.974 16:31:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:48.974 [2024-07-21 16:31:07.035695] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
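Before this network bring-up, the run was gated on the host's OpenSSL FIPS configuration (fips/fips.sh@83 through @127 above): the reported OpenSSL version must be >= 3.0.0, "openssl info -modulesdir" must point at an existing fips.so, "openssl list -providers" must show both the base and fips providers once OPENSSL_CONF is switched to the generated spdk_fips.conf, and a non-approved digest such as MD5 must be rejected (hence the intentional "Error setting digest" above). A rough standalone sketch of that gate, assuming a spdk_fips.conf like the one build_openssl_config emits is already in the current directory (that generation step is not reproduced here, and this is not the exact fips.sh logic):

    #!/usr/bin/env bash
    # Sketch only: mirrors the checks traced in fips.sh, simplified.
    set -e
    ver=$(openssl version | awk '{print $2}')               # e.g. "3.0.9"
    [[ $(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1) == 3.0.0 ]] || { echo "OpenSSL < 3.0.0"; exit 1; }
    [[ -f $(openssl info -modulesdir)/fips.so ]] || { echo "no fips.so provider module"; exit 1; }
    export OPENSSL_CONF=spdk_fips.conf                      # assumed present; enables base + fips providers
    openssl list -providers | grep -i name | grep -qi base || exit 1
    openssl list -providers | grep -i name | grep -qi fips || exit 1
    # In FIPS mode a non-approved digest must fail:
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 unexpectedly succeeded: FIPS mode is not active" >&2
        exit 1
    fi
    echo "FIPS gate passed"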
00:15:48.974 [2024-07-21 16:31:07.035788] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.974 [2024-07-21 16:31:07.177896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.231 [2024-07-21 16:31:07.290980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.231 [2024-07-21 16:31:07.291058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.231 [2024-07-21 16:31:07.291072] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.231 [2024-07-21 16:31:07.291084] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.231 [2024-07-21 16:31:07.291093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.231 [2024-07-21 16:31:07.291126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.797 16:31:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.797 16:31:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:49.797 16:31:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.797 16:31:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:49.797 16:31:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:50.055 16:31:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.055 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:50.055 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:50.055 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:50.055 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:50.055 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:50.055 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:50.055 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:50.055 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.313 [2024-07-21 16:31:08.303429] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.313 [2024-07-21 16:31:08.319382] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:50.313 [2024-07-21 16:31:08.319576] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.313 [2024-07-21 16:31:08.353129] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:50.313 malloc0 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85733 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@148 -- # waitforlisten 85733 /var/tmp/bdevperf.sock 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85733 ']' 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.313 16:31:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:50.313 [2024-07-21 16:31:08.467723] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:15:50.313 [2024-07-21 16:31:08.467830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85733 ] 00:15:50.570 [2024-07-21 16:31:08.607069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.570 [2024-07-21 16:31:08.718098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.505 16:31:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.505 16:31:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:51.505 16:31:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:51.505 [2024-07-21 16:31:09.650937] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:51.505 [2024-07-21 16:31:09.651072] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:51.763 TLSTESTn1 00:15:51.763 16:31:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.763 Running I/O for 10 seconds... 
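The verification step just launched is driven entirely over bdevperf's JSON-RPC socket: the controller is attached with the TLS pre-shared key written to key.txt, then perform_tests runs the preconfigured 128-deep, 4 KiB verify workload for 10 seconds. Stripped of the test-framework wrappers, the sequence amounts to the following sketch (same paths and arguments as in the trace; error handling omitted):

    SPDK=/home/vagrant/spdk_repo/spdk                        # repo path used throughout this run
    KEY=$SPDK/test/nvmf/fips/key.txt                         # NVMeTLSkey-1:01:... written and chmod 0600 above
    # bdevperf was started with: -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    # Run the queued verify workload and block until it completes
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The Latency table that follows is bdevperf's summary for the resulting TLSTESTn1 bdev.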
00:16:01.729 00:16:01.729 Latency(us) 00:16:01.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.729 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:01.729 Verification LBA range: start 0x0 length 0x2000 00:16:01.729 TLSTESTn1 : 10.01 4794.32 18.73 0.00 0.00 26650.78 6255.71 24546.21 00:16:01.729 =================================================================================================================== 00:16:01.729 Total : 4794.32 18.73 0.00 0.00 26650.78 6255.71 24546.21 00:16:01.729 0 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:01.729 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:01.729 nvmf_trace.0 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85733 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85733 ']' 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85733 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85733 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:01.987 killing process with pid 85733 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85733' 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85733 00:16:01.987 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.987 00:16:01.987 Latency(us) 00:16:01.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.987 =================================================================================================================== 00:16:01.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.987 [2024-07-21 16:31:19.990140] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:01.987 16:31:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85733 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.245 rmmod nvme_tcp 00:16:02.245 rmmod nvme_fabrics 00:16:02.245 rmmod nvme_keyring 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85681 ']' 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85681 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85681 ']' 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85681 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85681 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:02.245 killing process with pid 85681 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85681' 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85681 00:16:02.245 [2024-07-21 16:31:20.380720] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:02.245 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85681 00:16:02.517 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:02.518 ************************************ 00:16:02.518 END TEST nvmf_fips 00:16:02.518 ************************************ 00:16:02.518 00:16:02.518 real 0m14.415s 00:16:02.518 user 0m19.471s 00:16:02.518 sys 0m5.861s 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:02.518 16:31:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:02.776 16:31:20 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:02.776 16:31:20 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:16:02.776 16:31:20 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:16:02.776 16:31:20 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:16:02.776 16:31:20 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:02.776 16:31:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.776 16:31:20 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:16:02.776 16:31:20 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:02.776 16:31:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.776 16:31:20 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:16:02.776 16:31:20 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:02.776 16:31:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:02.776 16:31:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.776 16:31:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.776 ************************************ 00:16:02.776 START TEST nvmf_multicontroller 00:16:02.776 ************************************ 00:16:02.776 16:31:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:02.776 * Looking for test storage... 00:16:02.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:02.776 16:31:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.776 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
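nvmftestinit / nvmf_veth_init, traced below, rebuilds the same virtual test network the fips run used: a nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs (10.0.0.2 and 10.0.0.3), the initiator end at 10.0.0.1 left on the host, and the three host-side peers enslaved to an nvmf_br bridge, with iptables accepting TCP 4420 on the initiator interface. In outline (a sketch of the commands seen in the trace; the stale-device teardown and the "Cannot find device" / "Cannot open network namespace" probes are skipped):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: initiator, first target port, second target port
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # one bridge ties the three host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3               # target ports reachable from the host
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # and the initiator from inside the namespace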
00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.777 16:31:20 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:02.777 Cannot find device "nvmf_tgt_br" 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.777 Cannot find device "nvmf_tgt_br2" 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:02.777 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:03.036 Cannot find device "nvmf_tgt_br" 00:16:03.036 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:16:03.036 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:03.036 Cannot find device "nvmf_tgt_br2" 00:16:03.036 16:31:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:03.036 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:03.294 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:03.294 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:03.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:16:03.295 00:16:03.295 --- 10.0.0.2 ping statistics --- 00:16:03.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.295 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:03.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:03.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:03.295 00:16:03.295 --- 10.0.0.3 ping statistics --- 00:16:03.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.295 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:03.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:03.295 00:16:03.295 --- 10.0.0.1 ping statistics --- 00:16:03.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.295 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=86098 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 86098 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86098 ']' 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.295 16:31:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:03.295 [2024-07-21 16:31:21.342876] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:03.295 [2024-07-21 16:31:21.342968] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.295 [2024-07-21 16:31:21.483354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:03.553 [2024-07-21 16:31:21.578765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
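With the target (pid 86098) up inside the namespace, the rpc_cmd calls that follow set up the layout the multicontroller test needs: one TCP transport, and two subsystems, cnode1 and cnode2, each backed by a 64 MiB / 512-byte-block malloc bdev and listening on both 10.0.0.2:4420 and 10.0.0.2:4421. rpc_cmd is effectively the framework's shorthand for scripts/rpc.py against the target's default /var/tmp/spdk.sock, so condensed into direct calls the configuration is roughly (the RPC variable is just shorthand here):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py         # talks to /var/tmp/spdk.sock by default
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MiB, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

bdevperf (pid 86151) then attaches controller NVMe0 through cnode1 with -i 10.0.0.2 -c 60000; the NOT-wrapped call that follows confirms the expected failure ("A controller named NVMe0 already exists with the specified network path") when re-attaching under the same name with a different hostnqn, and the same check is then repeated against cnode2.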
00:16:03.553 [2024-07-21 16:31:21.578820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.553 [2024-07-21 16:31:21.578830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.553 [2024-07-21 16:31:21.578837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.553 [2024-07-21 16:31:21.578843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.553 [2024-07-21 16:31:21.578991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.553 [2024-07-21 16:31:21.579114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.553 [2024-07-21 16:31:21.579692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 [2024-07-21 16:31:22.392739] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 Malloc0 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 [2024-07-21 16:31:22.464226] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 [2024-07-21 16:31:22.472102] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 Malloc1 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=86151 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:04.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86151 /var/tmp/bdevperf.sock 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86151 ']' 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.488 16:31:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.423 NVMe0n1 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.423 1 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:05.423 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.681 2024/07/21 16:31:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:05.681 request: 00:16:05.681 { 00:16:05.681 "method": "bdev_nvme_attach_controller", 00:16:05.681 "params": { 00:16:05.681 "name": "NVMe0", 00:16:05.681 "trtype": "tcp", 00:16:05.681 "traddr": "10.0.0.2", 00:16:05.681 "adrfam": "ipv4", 00:16:05.681 "trsvcid": "4420", 00:16:05.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.681 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:05.681 "hostaddr": "10.0.0.2", 00:16:05.681 "hostsvcid": "60000", 00:16:05.681 "prchk_reftag": false, 00:16:05.681 "prchk_guard": false, 00:16:05.681 "hdgst": false, 00:16:05.681 "ddgst": false 00:16:05.681 } 00:16:05.681 } 00:16:05.681 Got JSON-RPC error response 00:16:05.681 GoRPCClient: error on JSON-RPC call 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:05.681 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:05.682 16:31:23 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.682 2024/07/21 16:31:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:05.682 request: 00:16:05.682 { 00:16:05.682 "method": "bdev_nvme_attach_controller", 00:16:05.682 "params": { 00:16:05.682 "name": "NVMe0", 00:16:05.682 "trtype": "tcp", 00:16:05.682 "traddr": "10.0.0.2", 00:16:05.682 "adrfam": "ipv4", 00:16:05.682 "trsvcid": "4420", 00:16:05.682 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:05.682 "hostaddr": "10.0.0.2", 00:16:05.682 "hostsvcid": "60000", 00:16:05.682 "prchk_reftag": false, 00:16:05.682 "prchk_guard": false, 00:16:05.682 "hdgst": false, 00:16:05.682 "ddgst": false 00:16:05.682 } 00:16:05.682 } 00:16:05.682 Got JSON-RPC error response 00:16:05.682 GoRPCClient: error on JSON-RPC call 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:05.682 16:31:23 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.682 2024/07/21 16:31:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:05.682 request: 00:16:05.682 { 00:16:05.682 "method": "bdev_nvme_attach_controller", 00:16:05.682 "params": { 00:16:05.682 "name": "NVMe0", 00:16:05.682 "trtype": "tcp", 00:16:05.682 "traddr": "10.0.0.2", 00:16:05.682 "adrfam": "ipv4", 00:16:05.682 "trsvcid": "4420", 00:16:05.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.682 "hostaddr": "10.0.0.2", 00:16:05.682 "hostsvcid": "60000", 00:16:05.682 "prchk_reftag": false, 00:16:05.682 "prchk_guard": false, 00:16:05.682 "hdgst": false, 00:16:05.682 "ddgst": false, 00:16:05.682 "multipath": "disable" 00:16:05.682 } 00:16:05.682 } 00:16:05.682 Got JSON-RPC error response 00:16:05.682 GoRPCClient: error on JSON-RPC call 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.682 2024/07/21 16:31:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:05.682 request: 00:16:05.682 { 00:16:05.682 "method": "bdev_nvme_attach_controller", 00:16:05.682 "params": { 00:16:05.682 "name": "NVMe0", 00:16:05.682 "trtype": "tcp", 00:16:05.682 "traddr": "10.0.0.2", 00:16:05.682 "adrfam": "ipv4", 00:16:05.682 "trsvcid": "4420", 00:16:05.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.682 "hostaddr": "10.0.0.2", 00:16:05.682 "hostsvcid": "60000", 00:16:05.682 "prchk_reftag": false, 00:16:05.682 "prchk_guard": false, 00:16:05.682 "hdgst": false, 00:16:05.682 "ddgst": false, 00:16:05.682 "multipath": "failover" 00:16:05.682 } 00:16:05.682 } 00:16:05.682 Got JSON-RPC error response 00:16:05.682 GoRPCClient: error on JSON-RPC call 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.682 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:05.682 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:05.682 16:31:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:07.054 0 00:16:07.054 16:31:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:07.054 16:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.054 16:31:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 86151 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86151 ']' 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86151 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86151 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:07.054 killing process with pid 86151 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86151' 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86151 00:16:07.054 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86151 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:16:07.312 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:07.312 [2024-07-21 16:31:22.603446] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:07.312 [2024-07-21 16:31:22.603587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86151 ] 00:16:07.312 [2024-07-21 16:31:22.738107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.312 [2024-07-21 16:31:22.843600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.312 [2024-07-21 16:31:23.828174] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name b01a2773-48b5-4698-b74c-33329396cc8c already exists 00:16:07.312 [2024-07-21 16:31:23.828224] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:b01a2773-48b5-4698-b74c-33329396cc8c alias for bdev NVMe1n1 00:16:07.312 [2024-07-21 16:31:23.828256] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:07.312 Running I/O for 1 seconds... 00:16:07.312 00:16:07.312 Latency(us) 00:16:07.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.312 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:07.312 NVMe0n1 : 1.00 22299.11 87.11 0.00 0.00 5732.01 3366.17 10545.34 00:16:07.312 =================================================================================================================== 00:16:07.312 Total : 22299.11 87.11 0.00 0.00 5732.01 3366.17 10545.34 00:16:07.312 Received shutdown signal, test time was about 1.000000 seconds 00:16:07.312 00:16:07.312 Latency(us) 00:16:07.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.312 =================================================================================================================== 00:16:07.312 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:07.312 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:07.312 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.313 rmmod nvme_tcp 00:16:07.313 rmmod nvme_fabrics 00:16:07.313 rmmod nvme_keyring 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.313 16:31:25 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 86098 ']' 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 86098 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86098 ']' 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86098 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86098 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:07.313 killing process with pid 86098 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86098' 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86098 00:16:07.313 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86098 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:07.879 00:16:07.879 real 0m5.059s 00:16:07.879 user 0m15.779s 00:16:07.879 sys 0m1.110s 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.879 16:31:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:07.879 ************************************ 00:16:07.879 END TEST nvmf_multicontroller 00:16:07.879 ************************************ 00:16:07.879 16:31:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:07.879 16:31:25 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:07.879 16:31:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:07.879 16:31:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.879 16:31:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.879 ************************************ 00:16:07.879 START TEST nvmf_aer 00:16:07.879 ************************************ 00:16:07.879 16:31:25 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:07.879 * Looking for test storage... 00:16:07.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.879 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:07.880 Cannot find device "nvmf_tgt_br" 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.880 Cannot find device "nvmf_tgt_br2" 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:07.880 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:08.138 Cannot find device "nvmf_tgt_br" 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:08.138 Cannot find device "nvmf_tgt_br2" 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.138 
16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.138 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:08.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:16:08.396 00:16:08.396 --- 10.0.0.2 ping statistics --- 00:16:08.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.396 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:08.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:08.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:08.396 00:16:08.396 --- 10.0.0.3 ping statistics --- 00:16:08.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.396 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
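For reference, the veth/namespace topology the harness is constructing in these steps (host side 10.0.0.1 on nvmf_init_if, target side 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge) can be reproduced stand-alone with roughly the commands below. This is a condensed sketch of the exact steps logged here, not the harness code itself, and it assumes root privileges.

# Condensed sketch of the nvmf_veth_init setup shown in the log (run as root).
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator side, two for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in and verify reachability, as the log does next.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2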
00:16:08.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:08.396 00:16:08.396 --- 10.0.0.1 ping statistics --- 00:16:08.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.396 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86404 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86404 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86404 ']' 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.396 16:31:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:08.396 [2024-07-21 16:31:26.452162] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:08.396 [2024-07-21 16:31:26.452247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.396 [2024-07-21 16:31:26.584746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.654 [2024-07-21 16:31:26.671982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.654 [2024-07-21 16:31:26.672031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
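In plain terms, the target just launched inside nvmf_tgt_ns_spdk is now configured over its JSON-RPC socket (/var/tmp/spdk.sock). The rpc_cmd calls that follow are assumed here to be a thin wrapper around scripts/rpc.py, so the same configuration can be reproduced directly as in this sketch (paths and arguments are the ones shown in the log; the comments are illustrative, not harness output):

# Launch the NVMe-oF TCP target inside the test namespace, as the harness does above.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# (the harness waits for /var/tmp/spdk.sock to come up before issuing RPCs)

# Same configuration the aer test applies below, issued directly with rpc.py.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options copied from the log
$rpc bdev_malloc_create 64 512 --name Malloc0                 # 64 MiB ramdisk with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2   # allow any host, at most 2 namespaces
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems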
00:16:08.654 [2024-07-21 16:31:26.672057] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.654 [2024-07-21 16:31:26.672065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.654 [2024-07-21 16:31:26.672072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.654 [2024-07-21 16:31:26.672226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.654 [2024-07-21 16:31:26.675324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.654 [2024-07-21 16:31:26.675459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.654 [2024-07-21 16:31:26.675603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.219 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.219 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:16:09.219 16:31:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.219 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.219 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 [2024-07-21 16:31:27.452475] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 Malloc0 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 [2024-07-21 16:31:27.519579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.477 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.477 [ 00:16:09.477 { 00:16:09.477 "allow_any_host": true, 00:16:09.477 "hosts": [], 00:16:09.477 "listen_addresses": [], 00:16:09.477 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:09.477 "subtype": "Discovery" 00:16:09.477 }, 00:16:09.477 { 00:16:09.477 "allow_any_host": true, 00:16:09.477 "hosts": [], 00:16:09.477 "listen_addresses": [ 00:16:09.477 { 00:16:09.477 "adrfam": "IPv4", 00:16:09.477 "traddr": "10.0.0.2", 00:16:09.477 "trsvcid": "4420", 00:16:09.477 "trtype": "TCP" 00:16:09.477 } 00:16:09.477 ], 00:16:09.477 "max_cntlid": 65519, 00:16:09.477 "max_namespaces": 2, 00:16:09.477 "min_cntlid": 1, 00:16:09.477 "model_number": "SPDK bdev Controller", 00:16:09.477 "namespaces": [ 00:16:09.477 { 00:16:09.477 "bdev_name": "Malloc0", 00:16:09.477 "name": "Malloc0", 00:16:09.477 "nguid": "33A2236F841F454F817B9EF2ED12C76C", 00:16:09.477 "nsid": 1, 00:16:09.477 "uuid": "33a2236f-841f-454f-817b-9ef2ed12c76c" 00:16:09.477 } 00:16:09.477 ], 00:16:09.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.478 "serial_number": "SPDK00000000000001", 00:16:09.478 "subtype": "NVMe" 00:16:09.478 } 00:16:09.478 ] 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86458 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:16:09.478 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 Malloc1 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 Asynchronous Event Request test 00:16:09.735 Attaching to 10.0.0.2 00:16:09.735 Attached to 10.0.0.2 00:16:09.735 Registering asynchronous event callbacks... 00:16:09.735 Starting namespace attribute notice tests for all controllers... 00:16:09.735 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:09.735 aer_cb - Changed Namespace 00:16:09.735 Cleaning up... 00:16:09.735 [ 00:16:09.735 { 00:16:09.735 "allow_any_host": true, 00:16:09.735 "hosts": [], 00:16:09.735 "listen_addresses": [], 00:16:09.735 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:09.735 "subtype": "Discovery" 00:16:09.735 }, 00:16:09.735 { 00:16:09.735 "allow_any_host": true, 00:16:09.735 "hosts": [], 00:16:09.735 "listen_addresses": [ 00:16:09.735 { 00:16:09.735 "adrfam": "IPv4", 00:16:09.735 "traddr": "10.0.0.2", 00:16:09.735 "trsvcid": "4420", 00:16:09.735 "trtype": "TCP" 00:16:09.735 } 00:16:09.735 ], 00:16:09.735 "max_cntlid": 65519, 00:16:09.735 "max_namespaces": 2, 00:16:09.735 "min_cntlid": 1, 00:16:09.735 "model_number": "SPDK bdev Controller", 00:16:09.735 "namespaces": [ 00:16:09.735 { 00:16:09.735 "bdev_name": "Malloc0", 00:16:09.735 "name": "Malloc0", 00:16:09.735 "nguid": "33A2236F841F454F817B9EF2ED12C76C", 00:16:09.735 "nsid": 1, 00:16:09.735 "uuid": "33a2236f-841f-454f-817b-9ef2ed12c76c" 00:16:09.735 }, 00:16:09.735 { 00:16:09.735 "bdev_name": "Malloc1", 00:16:09.735 "name": "Malloc1", 00:16:09.735 "nguid": "512FB588516B47888EAD79A6986268B7", 00:16:09.735 "nsid": 2, 00:16:09.735 "uuid": "512fb588-516b-4788-8ead-79a6986268b7" 00:16:09.735 } 00:16:09.735 ], 00:16:09.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.735 "serial_number": "SPDK00000000000001", 00:16:09.735 "subtype": "NVMe" 00:16:09.735 } 00:16:09.735 ] 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86458 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.735 16:31:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:16:09.993 16:31:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.993 16:31:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:16:09.993 16:31:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.993 16:31:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.993 rmmod nvme_tcp 00:16:09.993 rmmod nvme_fabrics 00:16:09.993 rmmod nvme_keyring 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86404 ']' 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86404 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86404 ']' 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86404 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86404 00:16:09.993 killing process with pid 86404 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86404' 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86404 00:16:09.993 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86404 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
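Summarizing the AER exercise just completed: test/nvme/aer/aer connects to the subsystem and waits for a Namespace Attribute Changed notice (log page 4, shown above as "aer_cb - Changed Namespace"), which the harness provokes by hot-adding a second malloc namespace. Below is a condensed sketch of that sequence; it substitutes direct rpc.py calls for the harness's rpc_cmd and a simple polling loop for waitforfile, both assumptions of the sketch rather than harness code.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Start the AER listener against the subsystem; it creates the touch file once armed.
# -n 2 matches the two namespaces expected after the hot-add below.
rm -f /tmp/aer_touch_file
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

# Hot-add a second namespace; the target emits the Namespace Attribute Changed AEN.
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

# Tear down, mirroring the cleanup shown above.
wait
$rpc bdev_malloc_delete Malloc0
$rpc bdev_malloc_delete Malloc1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1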
00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:10.250 00:16:10.250 real 0m2.365s 00:16:10.250 user 0m6.479s 00:16:10.250 sys 0m0.638s 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:10.250 ************************************ 00:16:10.250 END TEST nvmf_aer 00:16:10.250 ************************************ 00:16:10.250 16:31:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:10.250 16:31:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:10.250 16:31:28 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:10.250 16:31:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:10.250 16:31:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.250 16:31:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:10.250 ************************************ 00:16:10.250 START TEST nvmf_async_init 00:16:10.250 ************************************ 00:16:10.250 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:10.250 * Looking for test storage... 00:16:10.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.251 
16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:16:10.251 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=89baa7e19f2b4d548f736e008862e7f7 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:10.509 Cannot find device "nvmf_tgt_br" 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:10.509 Cannot find device "nvmf_tgt_br2" 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:10.509 Cannot find device "nvmf_tgt_br" 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:10.509 Cannot find device "nvmf_tgt_br2" 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:10.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:10.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:10.509 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:10.767 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:10.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:16:10.767 00:16:10.767 --- 10.0.0.2 ping statistics --- 00:16:10.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.767 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:10.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:10.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:10.768 00:16:10.768 --- 10.0.0.3 ping statistics --- 00:16:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.768 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:10.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:10.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:10.768 00:16:10.768 --- 10.0.0.1 ping statistics --- 00:16:10.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.768 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86628 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86628 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86628 ']' 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.768 16:31:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:10.768 [2024-07-21 16:31:28.942240] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:10.768 [2024-07-21 16:31:28.942353] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.025 [2024-07-21 16:31:29.081048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.025 [2024-07-21 16:31:29.174652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.025 [2024-07-21 16:31:29.174733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:11.025 [2024-07-21 16:31:29.174744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.025 [2024-07-21 16:31:29.174751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.025 [2024-07-21 16:31:29.174757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.025 [2024-07-21 16:31:29.174781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:11.968 [2024-07-21 16:31:29.953835] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.968 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:11.969 null0 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 89baa7e19f2b4d548f736e008862e7f7 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:11.969 
16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.969 16:31:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:11.969 [2024-07-21 16:31:29.997958] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.969 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.969 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:11.969 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.969 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.226 nvme0n1 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.226 [ 00:16:12.226 { 00:16:12.226 "aliases": [ 00:16:12.226 "89baa7e1-9f2b-4d54-8f73-6e008862e7f7" 00:16:12.226 ], 00:16:12.226 "assigned_rate_limits": { 00:16:12.226 "r_mbytes_per_sec": 0, 00:16:12.226 "rw_ios_per_sec": 0, 00:16:12.226 "rw_mbytes_per_sec": 0, 00:16:12.226 "w_mbytes_per_sec": 0 00:16:12.226 }, 00:16:12.226 "block_size": 512, 00:16:12.226 "claimed": false, 00:16:12.226 "driver_specific": { 00:16:12.226 "mp_policy": "active_passive", 00:16:12.226 "nvme": [ 00:16:12.226 { 00:16:12.226 "ctrlr_data": { 00:16:12.226 "ana_reporting": false, 00:16:12.226 "cntlid": 1, 00:16:12.226 "firmware_revision": "24.09", 00:16:12.226 "model_number": "SPDK bdev Controller", 00:16:12.226 "multi_ctrlr": true, 00:16:12.226 "oacs": { 00:16:12.226 "firmware": 0, 00:16:12.226 "format": 0, 00:16:12.226 "ns_manage": 0, 00:16:12.226 "security": 0 00:16:12.226 }, 00:16:12.226 "serial_number": "00000000000000000000", 00:16:12.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:12.226 "vendor_id": "0x8086" 00:16:12.226 }, 00:16:12.226 "ns_data": { 00:16:12.226 "can_share": true, 00:16:12.226 "id": 1 00:16:12.226 }, 00:16:12.226 "trid": { 00:16:12.226 "adrfam": "IPv4", 00:16:12.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:12.226 "traddr": "10.0.0.2", 00:16:12.226 "trsvcid": "4420", 00:16:12.226 "trtype": "TCP" 00:16:12.226 }, 00:16:12.226 "vs": { 00:16:12.226 "nvme_version": "1.3" 00:16:12.226 } 00:16:12.226 } 00:16:12.226 ] 00:16:12.226 }, 00:16:12.226 "memory_domains": [ 00:16:12.226 { 00:16:12.226 "dma_device_id": "system", 00:16:12.226 "dma_device_type": 1 00:16:12.226 } 00:16:12.226 ], 00:16:12.226 "name": "nvme0n1", 00:16:12.226 "num_blocks": 2097152, 00:16:12.226 "product_name": "NVMe disk", 00:16:12.226 "supported_io_types": { 00:16:12.226 "abort": true, 00:16:12.226 "compare": true, 00:16:12.226 "compare_and_write": true, 00:16:12.226 "copy": true, 00:16:12.226 "flush": true, 00:16:12.226 "get_zone_info": false, 00:16:12.226 "nvme_admin": true, 00:16:12.226 "nvme_io": true, 00:16:12.226 "nvme_io_md": false, 00:16:12.226 "nvme_iov_md": false, 00:16:12.226 "read": true, 00:16:12.226 "reset": true, 00:16:12.226 "seek_data": false, 00:16:12.226 "seek_hole": false, 00:16:12.226 "unmap": false, 00:16:12.226 "write": true, 00:16:12.226 "write_zeroes": true, 00:16:12.226 "zcopy": false, 00:16:12.226 
"zone_append": false, 00:16:12.226 "zone_management": false 00:16:12.226 }, 00:16:12.226 "uuid": "89baa7e1-9f2b-4d54-8f73-6e008862e7f7", 00:16:12.226 "zoned": false 00:16:12.226 } 00:16:12.226 ] 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.226 [2024-07-21 16:31:30.267904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:12.226 [2024-07-21 16:31:30.268015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246bb00 (9): Bad file descriptor 00:16:12.226 [2024-07-21 16:31:30.400422] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.226 [ 00:16:12.226 { 00:16:12.226 "aliases": [ 00:16:12.226 "89baa7e1-9f2b-4d54-8f73-6e008862e7f7" 00:16:12.226 ], 00:16:12.226 "assigned_rate_limits": { 00:16:12.226 "r_mbytes_per_sec": 0, 00:16:12.226 "rw_ios_per_sec": 0, 00:16:12.226 "rw_mbytes_per_sec": 0, 00:16:12.226 "w_mbytes_per_sec": 0 00:16:12.226 }, 00:16:12.226 "block_size": 512, 00:16:12.226 "claimed": false, 00:16:12.226 "driver_specific": { 00:16:12.226 "mp_policy": "active_passive", 00:16:12.226 "nvme": [ 00:16:12.226 { 00:16:12.226 "ctrlr_data": { 00:16:12.226 "ana_reporting": false, 00:16:12.226 "cntlid": 2, 00:16:12.226 "firmware_revision": "24.09", 00:16:12.226 "model_number": "SPDK bdev Controller", 00:16:12.226 "multi_ctrlr": true, 00:16:12.226 "oacs": { 00:16:12.226 "firmware": 0, 00:16:12.226 "format": 0, 00:16:12.226 "ns_manage": 0, 00:16:12.226 "security": 0 00:16:12.226 }, 00:16:12.226 "serial_number": "00000000000000000000", 00:16:12.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:12.226 "vendor_id": "0x8086" 00:16:12.226 }, 00:16:12.226 "ns_data": { 00:16:12.226 "can_share": true, 00:16:12.226 "id": 1 00:16:12.226 }, 00:16:12.226 "trid": { 00:16:12.226 "adrfam": "IPv4", 00:16:12.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:12.226 "traddr": "10.0.0.2", 00:16:12.226 "trsvcid": "4420", 00:16:12.226 "trtype": "TCP" 00:16:12.226 }, 00:16:12.226 "vs": { 00:16:12.226 "nvme_version": "1.3" 00:16:12.226 } 00:16:12.226 } 00:16:12.226 ] 00:16:12.226 }, 00:16:12.226 "memory_domains": [ 00:16:12.226 { 00:16:12.226 "dma_device_id": "system", 00:16:12.226 "dma_device_type": 1 00:16:12.226 } 00:16:12.226 ], 00:16:12.226 "name": "nvme0n1", 00:16:12.226 "num_blocks": 2097152, 00:16:12.226 "product_name": "NVMe disk", 00:16:12.226 "supported_io_types": { 00:16:12.226 "abort": true, 00:16:12.226 "compare": true, 00:16:12.226 "compare_and_write": true, 00:16:12.226 "copy": true, 00:16:12.226 "flush": true, 00:16:12.226 "get_zone_info": false, 00:16:12.226 "nvme_admin": true, 00:16:12.226 "nvme_io": true, 00:16:12.226 "nvme_io_md": false, 00:16:12.226 "nvme_iov_md": false, 00:16:12.226 "read": true, 
00:16:12.226 "reset": true, 00:16:12.226 "seek_data": false, 00:16:12.226 "seek_hole": false, 00:16:12.226 "unmap": false, 00:16:12.226 "write": true, 00:16:12.226 "write_zeroes": true, 00:16:12.226 "zcopy": false, 00:16:12.226 "zone_append": false, 00:16:12.226 "zone_management": false 00:16:12.226 }, 00:16:12.226 "uuid": "89baa7e1-9f2b-4d54-8f73-6e008862e7f7", 00:16:12.226 "zoned": false 00:16:12.226 } 00:16:12.226 ] 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.226 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.oixH7Shm5F 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.oixH7Shm5F 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.483 [2024-07-21 16:31:30.468039] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:12.483 [2024-07-21 16:31:30.468206] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oixH7Shm5F 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.483 [2024-07-21 16:31:30.476035] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oixH7Shm5F 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.483 16:31:30 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.483 [2024-07-21 16:31:30.484043] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:12.483 [2024-07-21 16:31:30.484129] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:12.483 nvme0n1 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.483 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.483 [ 00:16:12.483 { 00:16:12.483 "aliases": [ 00:16:12.483 "89baa7e1-9f2b-4d54-8f73-6e008862e7f7" 00:16:12.483 ], 00:16:12.483 "assigned_rate_limits": { 00:16:12.483 "r_mbytes_per_sec": 0, 00:16:12.483 "rw_ios_per_sec": 0, 00:16:12.483 "rw_mbytes_per_sec": 0, 00:16:12.483 "w_mbytes_per_sec": 0 00:16:12.483 }, 00:16:12.483 "block_size": 512, 00:16:12.483 "claimed": false, 00:16:12.483 "driver_specific": { 00:16:12.484 "mp_policy": "active_passive", 00:16:12.484 "nvme": [ 00:16:12.484 { 00:16:12.484 "ctrlr_data": { 00:16:12.484 "ana_reporting": false, 00:16:12.484 "cntlid": 3, 00:16:12.484 "firmware_revision": "24.09", 00:16:12.484 "model_number": "SPDK bdev Controller", 00:16:12.484 "multi_ctrlr": true, 00:16:12.484 "oacs": { 00:16:12.484 "firmware": 0, 00:16:12.484 "format": 0, 00:16:12.484 "ns_manage": 0, 00:16:12.484 "security": 0 00:16:12.484 }, 00:16:12.484 "serial_number": "00000000000000000000", 00:16:12.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:12.484 "vendor_id": "0x8086" 00:16:12.484 }, 00:16:12.484 "ns_data": { 00:16:12.484 "can_share": true, 00:16:12.484 "id": 1 00:16:12.484 }, 00:16:12.484 "trid": { 00:16:12.484 "adrfam": "IPv4", 00:16:12.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:12.484 "traddr": "10.0.0.2", 00:16:12.484 "trsvcid": "4421", 00:16:12.484 "trtype": "TCP" 00:16:12.484 }, 00:16:12.484 "vs": { 00:16:12.484 "nvme_version": "1.3" 00:16:12.484 } 00:16:12.484 } 00:16:12.484 ] 00:16:12.484 }, 00:16:12.484 "memory_domains": [ 00:16:12.484 { 00:16:12.484 "dma_device_id": "system", 00:16:12.484 "dma_device_type": 1 00:16:12.484 } 00:16:12.484 ], 00:16:12.484 "name": "nvme0n1", 00:16:12.484 "num_blocks": 2097152, 00:16:12.484 "product_name": "NVMe disk", 00:16:12.484 "supported_io_types": { 00:16:12.484 "abort": true, 00:16:12.484 "compare": true, 00:16:12.484 "compare_and_write": true, 00:16:12.484 "copy": true, 00:16:12.484 "flush": true, 00:16:12.484 "get_zone_info": false, 00:16:12.484 "nvme_admin": true, 00:16:12.484 "nvme_io": true, 00:16:12.484 "nvme_io_md": false, 00:16:12.484 "nvme_iov_md": false, 00:16:12.484 "read": true, 00:16:12.484 "reset": true, 00:16:12.484 "seek_data": false, 00:16:12.484 "seek_hole": false, 00:16:12.484 "unmap": false, 00:16:12.484 "write": true, 00:16:12.484 "write_zeroes": true, 00:16:12.484 "zcopy": false, 00:16:12.484 "zone_append": false, 00:16:12.484 "zone_management": false 00:16:12.484 }, 00:16:12.484 "uuid": "89baa7e1-9f2b-4d54-8f73-6e008862e7f7", 00:16:12.484 "zoned": false 00:16:12.484 } 00:16:12.484 ] 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.oixH7Shm5F 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.484 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.484 rmmod nvme_tcp 00:16:12.484 rmmod nvme_fabrics 00:16:12.484 rmmod nvme_keyring 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86628 ']' 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86628 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86628 ']' 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86628 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86628 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:12.742 killing process with pid 86628 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86628' 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86628 00:16:12.742 [2024-07-21 16:31:30.728112] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:12.742 [2024-07-21 16:31:30.728146] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86628 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.742 
16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.742 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.001 16:31:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:13.001 00:16:13.001 real 0m2.608s 00:16:13.001 user 0m2.354s 00:16:13.001 sys 0m0.633s 00:16:13.001 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.001 ************************************ 00:16:13.001 END TEST nvmf_async_init 00:16:13.001 ************************************ 00:16:13.001 16:31:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:13.001 16:31:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:13.001 16:31:30 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:13.001 16:31:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:13.001 16:31:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.001 16:31:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.001 ************************************ 00:16:13.001 START TEST dma 00:16:13.001 ************************************ 00:16:13.001 16:31:31 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:13.001 * Looking for test storage... 00:16:13.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:13.001 16:31:31 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.001 16:31:31 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.001 16:31:31 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.001 16:31:31 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.001 16:31:31 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.001 16:31:31 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.001 16:31:31 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.001 16:31:31 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:16:13.001 16:31:31 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.001 16:31:31 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.001 16:31:31 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:13.001 16:31:31 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:16:13.001 00:16:13.001 real 0m0.103s 00:16:13.001 user 0m0.047s 00:16:13.001 sys 0m0.061s 00:16:13.001 16:31:31 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.001 16:31:31 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:16:13.001 ************************************ 00:16:13.001 END TEST dma 00:16:13.001 ************************************ 00:16:13.001 16:31:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:13.001 16:31:31 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:13.001 16:31:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:13.001 16:31:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.001 16:31:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:13.001 ************************************ 00:16:13.001 START TEST nvmf_identify 00:16:13.001 ************************************ 00:16:13.001 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:13.261 * Looking for test storage... 00:16:13.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 
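The nvmf_identify run starting here rebuilds the same virtual fabric the async_init test set up above: nvmf_veth_init in nvmf/common.sh creates a network namespace for the target, veth pairs for the initiator and target sides, and a bridge joining them. Condensed to the commands visible in the log (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is configured the same way), the topology is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                               # initiator -> target reachability check

nvmf_tgt is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the initiator-side tools reach it at 10.0.0.2:4420 across the bridge, as the ping checks below confirm.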
00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:13.262 Cannot find device "nvmf_tgt_br" 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:13.262 Cannot find device "nvmf_tgt_br2" 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:16:13.262 Cannot find device "nvmf_tgt_br" 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:13.262 Cannot find device "nvmf_tgt_br2" 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.262 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:13.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:16:13.524 00:16:13.524 --- 10.0.0.2 ping statistics --- 00:16:13.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.524 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:13.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:16:13.524 00:16:13.524 --- 10.0.0.3 ping statistics --- 00:16:13.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.524 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:13.524 00:16:13.524 --- 10.0.0.1 ping statistics --- 00:16:13.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.524 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86898 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86898 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86898 ']' 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.524 16:31:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:13.782 [2024-07-21 16:31:31.737665] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:13.782 [2024-07-21 16:31:31.737765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.782 [2024-07-21 16:31:31.879865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.040 [2024-07-21 16:31:31.996845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.040 [2024-07-21 16:31:31.996932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.040 [2024-07-21 16:31:31.996951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.040 [2024-07-21 16:31:31.996962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.040 [2024-07-21 16:31:31.996972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.040 [2024-07-21 16:31:31.997137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.040 [2024-07-21 16:31:31.997904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.040 [2024-07-21 16:31:31.998017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.040 [2024-07-21 16:31:31.998023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.606 [2024-07-21 16:31:32.738773] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.606 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 Malloc0 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.865 
16:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 [2024-07-21 16:31:32.845340] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:14.865 [ 00:16:14.865 { 00:16:14.865 "allow_any_host": true, 00:16:14.865 "hosts": [], 00:16:14.865 "listen_addresses": [ 00:16:14.865 { 00:16:14.865 "adrfam": "IPv4", 00:16:14.865 "traddr": "10.0.0.2", 00:16:14.865 "trsvcid": "4420", 00:16:14.865 "trtype": "TCP" 00:16:14.865 } 00:16:14.865 ], 00:16:14.865 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:14.865 "subtype": "Discovery" 00:16:14.865 }, 00:16:14.865 { 00:16:14.865 "allow_any_host": true, 00:16:14.865 "hosts": [], 00:16:14.865 "listen_addresses": [ 00:16:14.865 { 00:16:14.865 "adrfam": "IPv4", 00:16:14.865 "traddr": "10.0.0.2", 00:16:14.865 "trsvcid": "4420", 00:16:14.865 "trtype": "TCP" 00:16:14.865 } 00:16:14.865 ], 00:16:14.865 "max_cntlid": 65519, 00:16:14.865 "max_namespaces": 32, 00:16:14.865 "min_cntlid": 1, 00:16:14.865 "model_number": "SPDK bdev Controller", 00:16:14.865 "namespaces": [ 00:16:14.865 { 00:16:14.865 "bdev_name": "Malloc0", 00:16:14.865 "eui64": "ABCDEF0123456789", 00:16:14.865 "name": "Malloc0", 00:16:14.865 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:14.865 "nsid": 1, 00:16:14.865 "uuid": "371d8c15-f2a2-4174-aa37-db6d9dd1199b" 00:16:14.865 } 00:16:14.865 ], 00:16:14.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.865 "serial_number": "SPDK00000000000001", 00:16:14.865 "subtype": "NVMe" 00:16:14.865 } 00:16:14.865 ] 
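[editor note] Stripped of the harness noise, the target state dumped by nvmf_get_subsystems above is the result of the handful of RPCs host/identify.sh issues through rpc_cmd (in the SPDK test harness this is essentially scripts/rpc.py pointed at the target's RPC socket). A standalone sketch of the same configuration, with paths relative to the SPDK repo root and the target started inside the test namespace as at identify.sh@18:

# Start the target in the test namespace, then wait for /var/tmp/spdk.sock to appear.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Same RPC sequence as the test, issued directly via scripts/rpc.py.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Dump the resulting subsystems; this is the JSON block shown above.
./scripts/rpc.py nvmf_get_subsystems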
00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.865 16:31:32 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:14.865 [2024-07-21 16:31:32.899294] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:14.866 [2024-07-21 16:31:32.899348] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86951 ] 00:16:14.866 [2024-07-21 16:31:33.036024] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:14.866 [2024-07-21 16:31:33.036096] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:14.866 [2024-07-21 16:31:33.036103] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:14.866 [2024-07-21 16:31:33.036118] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:14.866 [2024-07-21 16:31:33.036126] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:14.866 [2024-07-21 16:31:33.036290] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:14.866 [2024-07-21 16:31:33.036348] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf1ea60 0 00:16:14.866 [2024-07-21 16:31:33.042285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:14.866 [2024-07-21 16:31:33.042308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:14.866 [2024-07-21 16:31:33.042322] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:14.866 [2024-07-21 16:31:33.042326] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:14.866 [2024-07-21 16:31:33.042375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.042383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.042387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.866 [2024-07-21 16:31:33.042402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:14.866 [2024-07-21 16:31:33.042430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.866 [2024-07-21 16:31:33.050278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.866 [2024-07-21 16:31:33.050295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.866 [2024-07-21 16:31:33.050299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:14.866 [2024-07-21 16:31:33.050316] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:14.866 [2024-07-21 16:31:33.050323] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:14.866 [2024-07-21 16:31:33.050330] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:14.866 [2024-07-21 16:31:33.050348] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.866 [2024-07-21 16:31:33.050366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.866 [2024-07-21 16:31:33.050391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.866 [2024-07-21 16:31:33.050465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.866 [2024-07-21 16:31:33.050472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.866 [2024-07-21 16:31:33.050475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:14.866 [2024-07-21 16:31:33.050485] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:14.866 [2024-07-21 16:31:33.050492] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:14.866 [2024-07-21 16:31:33.050500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.866 [2024-07-21 16:31:33.050514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.866 [2024-07-21 16:31:33.050531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.866 [2024-07-21 16:31:33.050580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.866 [2024-07-21 16:31:33.050587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.866 [2024-07-21 16:31:33.050590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:14.866 [2024-07-21 16:31:33.050600] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:14.866 [2024-07-21 16:31:33.050608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:14.866 [2024-07-21 16:31:33.050615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.866 
[2024-07-21 16:31:33.050629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.866 [2024-07-21 16:31:33.050646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.866 [2024-07-21 16:31:33.050697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.866 [2024-07-21 16:31:33.050705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.866 [2024-07-21 16:31:33.050708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:14.866 [2024-07-21 16:31:33.050717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:14.866 [2024-07-21 16:31:33.050727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.866 [2024-07-21 16:31:33.050741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.866 [2024-07-21 16:31:33.050757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.866 [2024-07-21 16:31:33.050805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.866 [2024-07-21 16:31:33.050811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.866 [2024-07-21 16:31:33.050815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:14.866 [2024-07-21 16:31:33.050823] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:14.866 [2024-07-21 16:31:33.050829] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:14.866 [2024-07-21 16:31:33.050836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:14.866 [2024-07-21 16:31:33.050941] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:14.866 [2024-07-21 16:31:33.050946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:14.866 [2024-07-21 16:31:33.050956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.050963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.866 [2024-07-21 16:31:33.050971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.866 [2024-07-21 
16:31:33.050987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.866 [2024-07-21 16:31:33.051035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.866 [2024-07-21 16:31:33.051041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.866 [2024-07-21 16:31:33.051045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.051048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:14.866 [2024-07-21 16:31:33.051053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:14.866 [2024-07-21 16:31:33.051062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.051066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.051070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.866 [2024-07-21 16:31:33.051076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.866 [2024-07-21 16:31:33.051092] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.866 [2024-07-21 16:31:33.051141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.866 [2024-07-21 16:31:33.051147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.866 [2024-07-21 16:31:33.051150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.051154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:14.866 [2024-07-21 16:31:33.051159] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:14.866 [2024-07-21 16:31:33.051164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:14.866 [2024-07-21 16:31:33.051171] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:14.866 [2024-07-21 16:31:33.051180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:14.866 [2024-07-21 16:31:33.051191] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.051195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.866 [2024-07-21 16:31:33.051203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.866 [2024-07-21 16:31:33.051220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.866 [2024-07-21 16:31:33.051326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:14.866 [2024-07-21 16:31:33.051334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:14.866 [2024-07-21 16:31:33.051338] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 
16:31:33.051342] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf1ea60): datao=0, datal=4096, cccid=0 00:16:14.866 [2024-07-21 16:31:33.051347] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf61840) on tqpair(0xf1ea60): expected_datao=0, payload_size=4096 00:16:14.866 [2024-07-21 16:31:33.051352] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.051360] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.051364] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:14.866 [2024-07-21 16:31:33.051372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.866 [2024-07-21 16:31:33.051377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.867 [2024-07-21 16:31:33.051381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051384] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:14.867 [2024-07-21 16:31:33.051393] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:14.867 [2024-07-21 16:31:33.051398] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:14.867 [2024-07-21 16:31:33.051403] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:14.867 [2024-07-21 16:31:33.051409] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:14.867 [2024-07-21 16:31:33.051414] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:14.867 [2024-07-21 16:31:33.051419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:14.867 [2024-07-21 16:31:33.051427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:14.867 [2024-07-21 16:31:33.051436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.867 [2024-07-21 16:31:33.051451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:14.867 [2024-07-21 16:31:33.051471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.867 [2024-07-21 16:31:33.051531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.867 [2024-07-21 16:31:33.051537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.867 [2024-07-21 16:31:33.051541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:14.867 [2024-07-21 16:31:33.051553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051557] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf1ea60) 00:16:14.867 [2024-07-21 16:31:33.051566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.867 [2024-07-21 16:31:33.051572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf1ea60) 00:16:14.867 [2024-07-21 16:31:33.051585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.867 [2024-07-21 16:31:33.051591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf1ea60) 00:16:14.867 [2024-07-21 16:31:33.051602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.867 [2024-07-21 16:31:33.051608] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:14.867 [2024-07-21 16:31:33.051625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.867 [2024-07-21 16:31:33.051630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:14.867 [2024-07-21 16:31:33.051642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:14.867 [2024-07-21 16:31:33.051650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf1ea60) 00:16:14.867 [2024-07-21 16:31:33.051667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.867 [2024-07-21 16:31:33.051686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61840, cid 0, qid 0 00:16:14.867 [2024-07-21 16:31:33.051692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf619c0, cid 1, qid 0 00:16:14.867 [2024-07-21 16:31:33.051697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61b40, cid 2, qid 0 00:16:14.867 [2024-07-21 16:31:33.051701] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:14.867 [2024-07-21 16:31:33.051706] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61e40, cid 4, qid 0 00:16:14.867 [2024-07-21 16:31:33.051790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.867 [2024-07-21 
16:31:33.051796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.867 [2024-07-21 16:31:33.051799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61e40) on tqpair=0xf1ea60 00:16:14.867 [2024-07-21 16:31:33.051808] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:14.867 [2024-07-21 16:31:33.051817] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:14.867 [2024-07-21 16:31:33.051829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf1ea60) 00:16:14.867 [2024-07-21 16:31:33.051840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.867 [2024-07-21 16:31:33.051857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61e40, cid 4, qid 0 00:16:14.867 [2024-07-21 16:31:33.051912] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:14.867 [2024-07-21 16:31:33.051918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:14.867 [2024-07-21 16:31:33.051922] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051925] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf1ea60): datao=0, datal=4096, cccid=4 00:16:14.867 [2024-07-21 16:31:33.051930] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf61e40) on tqpair(0xf1ea60): expected_datao=0, payload_size=4096 00:16:14.867 [2024-07-21 16:31:33.051934] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051941] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051944] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051952] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.867 [2024-07-21 16:31:33.051957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.867 [2024-07-21 16:31:33.051960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.051964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61e40) on tqpair=0xf1ea60 00:16:14.867 [2024-07-21 16:31:33.051976] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:14.867 [2024-07-21 16:31:33.052007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.052013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf1ea60) 00:16:14.867 [2024-07-21 16:31:33.052020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:14.867 [2024-07-21 16:31:33.052027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.052031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.052034] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf1ea60) 00:16:14.867 [2024-07-21 16:31:33.052040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.867 [2024-07-21 16:31:33.052064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61e40, cid 4, qid 0 00:16:14.867 [2024-07-21 16:31:33.052071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61fc0, cid 5, qid 0 00:16:14.867 [2024-07-21 16:31:33.052158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:14.867 [2024-07-21 16:31:33.052165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:14.867 [2024-07-21 16:31:33.052168] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.052171] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf1ea60): datao=0, datal=1024, cccid=4 00:16:14.867 [2024-07-21 16:31:33.052176] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf61e40) on tqpair(0xf1ea60): expected_datao=0, payload_size=1024 00:16:14.867 [2024-07-21 16:31:33.052180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.052186] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.052190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.052195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:14.867 [2024-07-21 16:31:33.052200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:14.867 [2024-07-21 16:31:33.052206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:14.867 [2024-07-21 16:31:33.052210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61fc0) on tqpair=0xf1ea60 00:16:15.129 [2024-07-21 16:31:33.093320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.129 [2024-07-21 16:31:33.093338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.129 [2024-07-21 16:31:33.093342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61e40) on tqpair=0xf1ea60 00:16:15.129 [2024-07-21 16:31:33.093361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf1ea60) 00:16:15.129 [2024-07-21 16:31:33.093373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.129 [2024-07-21 16:31:33.093402] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61e40, cid 4, qid 0 00:16:15.129 [2024-07-21 16:31:33.093473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.129 [2024-07-21 16:31:33.093479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.129 [2024-07-21 16:31:33.093483] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093486] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf1ea60): datao=0, datal=3072, cccid=4 00:16:15.129 [2024-07-21 16:31:33.093490] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf61e40) on tqpair(0xf1ea60): expected_datao=0, payload_size=3072 00:16:15.129 [2024-07-21 16:31:33.093495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093501] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093505] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.129 [2024-07-21 16:31:33.093518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.129 [2024-07-21 16:31:33.093521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61e40) on tqpair=0xf1ea60 00:16:15.129 [2024-07-21 16:31:33.093535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf1ea60) 00:16:15.129 [2024-07-21 16:31:33.093546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.129 [2024-07-21 16:31:33.093570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61e40, cid 4, qid 0 00:16:15.129 [2024-07-21 16:31:33.093636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.129 [2024-07-21 16:31:33.093642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.129 [2024-07-21 16:31:33.093645] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093648] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf1ea60): datao=0, datal=8, cccid=4 00:16:15.129 [2024-07-21 16:31:33.093653] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf61e40) on tqpair(0xf1ea60): expected_datao=0, payload_size=8 00:16:15.129 [2024-07-21 16:31:33.093657] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093663] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.129 [2024-07-21 16:31:33.093666] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.129 ===================================================== 00:16:15.129 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:15.129 ===================================================== 00:16:15.129 Controller Capabilities/Features 00:16:15.129 ================================ 00:16:15.129 Vendor ID: 0000 00:16:15.129 Subsystem Vendor ID: 0000 00:16:15.129 Serial Number: .................... 00:16:15.129 Model Number: ........................................ 
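[editor note] Everything from the banner above through the discovery log entries further down is the output of the spdk_nvme_identify utility, interleaved with the debug tracing it was asked to emit. Run by hand against the same target it is simply (a sketch; path relative to the SPDK repo root, transport ID string taken from identify.sh@39 above):

# Identify the discovery controller over NVMe/TCP with all trace flags enabled;
# -L all is what produces the *DEBUG* lines interleaved with the report.
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all

# Pointing the same tool at subnqn:nqn.2016-06.io.spdk:cnode1 instead would report the
# Malloc0 namespace (nsid 1); that step is not part of this excerpt.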
00:16:15.129 Firmware Version: 24.09 00:16:15.129 Recommended Arb Burst: 0 00:16:15.129 IEEE OUI Identifier: 00 00 00 00:16:15.129 Multi-path I/O 00:16:15.129 May have multiple subsystem ports: No 00:16:15.129 May have multiple controllers: No 00:16:15.129 Associated with SR-IOV VF: No 00:16:15.129 Max Data Transfer Size: 131072 00:16:15.129 Max Number of Namespaces: 0 00:16:15.129 Max Number of I/O Queues: 1024 00:16:15.129 NVMe Specification Version (VS): 1.3 00:16:15.129 NVMe Specification Version (Identify): 1.3 00:16:15.129 Maximum Queue Entries: 128 00:16:15.129 Contiguous Queues Required: Yes 00:16:15.129 Arbitration Mechanisms Supported 00:16:15.129 Weighted Round Robin: Not Supported 00:16:15.129 Vendor Specific: Not Supported 00:16:15.129 Reset Timeout: 15000 ms 00:16:15.129 Doorbell Stride: 4 bytes 00:16:15.129 NVM Subsystem Reset: Not Supported 00:16:15.129 Command Sets Supported 00:16:15.129 NVM Command Set: Supported 00:16:15.129 Boot Partition: Not Supported 00:16:15.129 Memory Page Size Minimum: 4096 bytes 00:16:15.129 Memory Page Size Maximum: 4096 bytes 00:16:15.129 Persistent Memory Region: Not Supported 00:16:15.129 Optional Asynchronous Events Supported 00:16:15.129 Namespace Attribute Notices: Not Supported 00:16:15.129 Firmware Activation Notices: Not Supported 00:16:15.129 ANA Change Notices: Not Supported 00:16:15.129 PLE Aggregate Log Change Notices: Not Supported 00:16:15.129 LBA Status Info Alert Notices: Not Supported 00:16:15.129 EGE Aggregate Log Change Notices: Not Supported 00:16:15.130 Normal NVM Subsystem Shutdown event: Not Supported 00:16:15.130 Zone Descriptor Change Notices: Not Supported 00:16:15.130 Discovery Log Change Notices: Supported 00:16:15.130 Controller Attributes 00:16:15.130 128-bit Host Identifier: Not Supported 00:16:15.130 Non-Operational Permissive Mode: Not Supported 00:16:15.130 NVM Sets: Not Supported 00:16:15.130 Read Recovery Levels: Not Supported 00:16:15.130 Endurance Groups: Not Supported 00:16:15.130 Predictable Latency Mode: Not Supported 00:16:15.130 Traffic Based Keep ALive: Not Supported 00:16:15.130 Namespace Granularity: Not Supported 00:16:15.130 SQ Associations: Not Supported 00:16:15.130 UUID List: Not Supported 00:16:15.130 Multi-Domain Subsystem: Not Supported 00:16:15.130 Fixed Capacity Management: Not Supported 00:16:15.130 Variable Capacity Management: Not Supported 00:16:15.130 Delete Endurance Group: Not Supported 00:16:15.130 Delete NVM Set: Not Supported 00:16:15.130 Extended LBA Formats Supported: Not Supported 00:16:15.130 Flexible Data Placement Supported: Not Supported 00:16:15.130 00:16:15.130 Controller Memory Buffer Support 00:16:15.130 ================================ 00:16:15.130 Supported: No 00:16:15.130 00:16:15.130 Persistent Memory Region Support 00:16:15.130 ================================ 00:16:15.130 Supported: No 00:16:15.130 00:16:15.130 Admin Command Set Attributes 00:16:15.130 ============================ 00:16:15.130 Security Send/Receive: Not Supported 00:16:15.130 Format NVM: Not Supported 00:16:15.130 Firmware Activate/Download: Not Supported 00:16:15.130 Namespace Management: Not Supported 00:16:15.130 Device Self-Test: Not Supported 00:16:15.130 Directives: Not Supported 00:16:15.130 NVMe-MI: Not Supported 00:16:15.130 Virtualization Management: Not Supported 00:16:15.130 Doorbell Buffer Config: Not Supported 00:16:15.130 Get LBA Status Capability: Not Supported 00:16:15.130 Command & Feature Lockdown Capability: Not Supported 00:16:15.130 Abort Command Limit: 1 00:16:15.130 Async 
Event Request Limit: 4 00:16:15.130 Number of Firmware Slots: N/A 00:16:15.130 Firmware Slot 1 Read-Only: N/A 00:16:15.130 [2024-07-21 16:31:33.135366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.130 [2024-07-21 16:31:33.135388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.130 [2024-07-21 16:31:33.135392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.130 [2024-07-21 16:31:33.135396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61e40) on tqpair=0xf1ea60 00:16:15.130 Firmware Activation Without Reset: N/A 00:16:15.130 Multiple Update Detection Support: N/A 00:16:15.130 Firmware Update Granularity: No Information Provided 00:16:15.130 Per-Namespace SMART Log: No 00:16:15.130 Asymmetric Namespace Access Log Page: Not Supported 00:16:15.130 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:15.130 Command Effects Log Page: Not Supported 00:16:15.130 Get Log Page Extended Data: Supported 00:16:15.130 Telemetry Log Pages: Not Supported 00:16:15.130 Persistent Event Log Pages: Not Supported 00:16:15.130 Supported Log Pages Log Page: May Support 00:16:15.130 Commands Supported & Effects Log Page: Not Supported 00:16:15.130 Feature Identifiers & Effects Log Page:May Support 00:16:15.130 NVMe-MI Commands & Effects Log Page: May Support 00:16:15.130 Data Area 4 for Telemetry Log: Not Supported 00:16:15.130 Error Log Page Entries Supported: 128 00:16:15.130 Keep Alive: Not Supported 00:16:15.130 00:16:15.130 NVM Command Set Attributes 00:16:15.130 ========================== 00:16:15.130 Submission Queue Entry Size 00:16:15.130 Max: 1 00:16:15.130 Min: 1 00:16:15.130 Completion Queue Entry Size 00:16:15.130 Max: 1 00:16:15.130 Min: 1 00:16:15.130 Number of Namespaces: 0 00:16:15.130 Compare Command: Not Supported 00:16:15.130 Write Uncorrectable Command: Not Supported 00:16:15.130 Dataset Management Command: Not Supported 00:16:15.130 Write Zeroes Command: Not Supported 00:16:15.130 Set Features Save Field: Not Supported 00:16:15.130 Reservations: Not Supported 00:16:15.130 Timestamp: Not Supported 00:16:15.130 Copy: Not Supported 00:16:15.130 Volatile Write Cache: Not Present 00:16:15.130 Atomic Write Unit (Normal): 1 00:16:15.130 Atomic Write Unit (PFail): 1 00:16:15.130 Atomic Compare & Write Unit: 1 00:16:15.130 Fused Compare & Write: Supported 00:16:15.130 Scatter-Gather List 00:16:15.130 SGL Command Set: Supported 00:16:15.130 SGL Keyed: Supported 00:16:15.130 SGL Bit Bucket Descriptor: Not Supported 00:16:15.130 SGL Metadata Pointer: Not Supported 00:16:15.130 Oversized SGL: Not Supported 00:16:15.130 SGL Metadata Address: Not Supported 00:16:15.130 SGL Offset: Supported 00:16:15.130 Transport SGL Data Block: Not Supported 00:16:15.130 Replay Protected Memory Block: Not Supported 00:16:15.130 00:16:15.130 Firmware Slot Information 00:16:15.130 ========================= 00:16:15.130 Active slot: 0 00:16:15.130 00:16:15.130 00:16:15.130 Error Log 00:16:15.130 ========= 00:16:15.130 00:16:15.130 Active Namespaces 00:16:15.130 ================= 00:16:15.130 Discovery Log Page 00:16:15.130 ================== 00:16:15.130 Generation Counter: 2 00:16:15.130 Number of Records: 2 00:16:15.130 Record Format: 0 00:16:15.130 00:16:15.130 Discovery Log Entry 0 00:16:15.130 ---------------------- 00:16:15.130 Transport Type: 3 (TCP) 00:16:15.130 Address Family: 1 (IPv4) 00:16:15.130 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:15.130 Entry Flags: 00:16:15.130 Duplicate Returned 
Information: 1 00:16:15.130 Explicit Persistent Connection Support for Discovery: 1 00:16:15.130 Transport Requirements: 00:16:15.130 Secure Channel: Not Required 00:16:15.130 Port ID: 0 (0x0000) 00:16:15.130 Controller ID: 65535 (0xffff) 00:16:15.130 Admin Max SQ Size: 128 00:16:15.130 Transport Service Identifier: 4420 00:16:15.130 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:15.130 Transport Address: 10.0.0.2 00:16:15.130 Discovery Log Entry 1 00:16:15.130 ---------------------- 00:16:15.130 Transport Type: 3 (TCP) 00:16:15.130 Address Family: 1 (IPv4) 00:16:15.130 Subsystem Type: 2 (NVM Subsystem) 00:16:15.130 Entry Flags: 00:16:15.130 Duplicate Returned Information: 0 00:16:15.130 Explicit Persistent Connection Support for Discovery: 0 00:16:15.130 Transport Requirements: 00:16:15.130 Secure Channel: Not Required 00:16:15.130 Port ID: 0 (0x0000) 00:16:15.130 Controller ID: 65535 (0xffff) 00:16:15.130 Admin Max SQ Size: 128 00:16:15.130 Transport Service Identifier: 4420 00:16:15.130 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:15.130 Transport Address: 10.0.0.2 [2024-07-21 16:31:33.135522] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:15.130 [2024-07-21 16:31:33.135538] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61840) on tqpair=0xf1ea60 00:16:15.130 [2024-07-21 16:31:33.135546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.130 [2024-07-21 16:31:33.135552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf619c0) on tqpair=0xf1ea60 00:16:15.130 [2024-07-21 16:31:33.135556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.130 [2024-07-21 16:31:33.135561] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61b40) on tqpair=0xf1ea60 00:16:15.130 [2024-07-21 16:31:33.135565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.130 [2024-07-21 16:31:33.135570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.130 [2024-07-21 16:31:33.135574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.130 [2024-07-21 16:31:33.135583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.130 [2024-07-21 16:31:33.135588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.130 [2024-07-21 16:31:33.135591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.130 [2024-07-21 16:31:33.135599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.130 [2024-07-21 16:31:33.135624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.130 [2024-07-21 16:31:33.135668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.130 [2024-07-21 16:31:33.135675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.130 [2024-07-21 16:31:33.135678] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.130 [2024-07-21 16:31:33.135682] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.130 [2024-07-21 16:31:33.135690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.130 [2024-07-21 16:31:33.135694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.130 [2024-07-21 16:31:33.135697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.130 [2024-07-21 16:31:33.135704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.130 [2024-07-21 16:31:33.135736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.130 [2024-07-21 16:31:33.135802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.130 [2024-07-21 16:31:33.135808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.130 [2024-07-21 16:31:33.135811] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.130 [2024-07-21 16:31:33.135815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.130 [2024-07-21 16:31:33.135820] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:15.130 [2024-07-21 16:31:33.135825] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:15.131 [2024-07-21 16:31:33.135834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.135838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.135842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.135849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.135865] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.135931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.135937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.135940] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.135944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.135954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.135959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.135962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.135969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.135985] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.136037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.136043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.136046] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.136059] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.136074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.136090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.136138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.136144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.136147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.136160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.136175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.136206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.136254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.136261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.136264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.136278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.136304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.136325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.136388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.136395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.136398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 
[2024-07-21 16:31:33.136412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.136427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.136444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.136510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.136516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.136520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.136533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.136548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.136564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.136617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.136623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.136626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.136639] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136644] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.136654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.136670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.136720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.136727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.136730] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.136743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 
16:31:33.136751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.136758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.136790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.136836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.136842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.136862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.136876] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.136884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.136908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.136925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.136994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.137001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.137005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.137009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.137019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.137024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.137028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.137036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.137053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.137107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.137114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.137118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.137122] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.137133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.137137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.137141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.137149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.137167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.137218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.137225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.131 [2024-07-21 16:31:33.137229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.137234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.131 [2024-07-21 16:31:33.137259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.137264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.131 [2024-07-21 16:31:33.137268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.131 [2024-07-21 16:31:33.137275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.131 [2024-07-21 16:31:33.137292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.131 [2024-07-21 16:31:33.137361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.131 [2024-07-21 16:31:33.141303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.132 [2024-07-21 16:31:33.141318] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.141323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.132 [2024-07-21 16:31:33.141337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.141342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.141346] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf1ea60) 00:16:15.132 [2024-07-21 16:31:33.141354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.132 [2024-07-21 16:31:33.141380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf61cc0, cid 3, qid 0 00:16:15.132 [2024-07-21 16:31:33.141437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.132 [2024-07-21 16:31:33.141443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.132 [2024-07-21 16:31:33.141446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.141450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf61cc0) on tqpair=0xf1ea60 00:16:15.132 [2024-07-21 16:31:33.141458] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:16:15.132 00:16:15.132 16:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:15.132 [2024-07-21 16:31:33.177458] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
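For reference, the spdk_nvme_identify run started above is driving the same admin-queue bring-up that the DEBUG lines which follow trace in detail: it parses the -r transport-ID string, initializes the SPDK environment, connects to the target (FABRIC CONNECT, PROPERTY GET/SET for VS/CAP/CC/CSTS), waits for CSTS.RDY, and then issues IDENTIFY. The sketch below is a minimal, hedged illustration of that flow using the public SPDK API from spdk/env.h and spdk/nvme.h; it is not the test's own code, the application name and printed fields are illustrative only, and build/link details are omitted.

/* Minimal sketch (assumption: public SPDK NVMe driver API as shipped in this
 * tree): connect to the same NVMe-oF/TCP subsystem that spdk_nvme_identify
 * targets above and print two controller-data fields. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* hypothetical app name, not from the log */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string that was passed to spdk_nvme_identify -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() performs the admin-queue bring-up traced in the
	 * DEBUG output below (FABRIC CONNECT, PROPERTY GET/SET, IDENTIFY). */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	/* sn/mn are fixed-width, not NUL-terminated, hence the field widths. */
	printf("Serial: %.20s  Model: %.40s\n", cdata->sn, cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The spdk_nvme_identify tool itself takes the same route before dumping the controller report seen later in this log; the sketch simply stops after reading the controller data instead of printing the full identify structures.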
00:16:15.132 [2024-07-21 16:31:33.177512] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86959 ] 00:16:15.132 [2024-07-21 16:31:33.312610] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:15.132 [2024-07-21 16:31:33.312679] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:15.132 [2024-07-21 16:31:33.312685] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:15.132 [2024-07-21 16:31:33.312700] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:15.132 [2024-07-21 16:31:33.312707] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:15.132 [2024-07-21 16:31:33.312806] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:15.132 [2024-07-21 16:31:33.312850] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x501a60 0 00:16:15.132 [2024-07-21 16:31:33.316292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:15.132 [2024-07-21 16:31:33.316312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:15.132 [2024-07-21 16:31:33.316317] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:15.132 [2024-07-21 16:31:33.316320] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:15.132 [2024-07-21 16:31:33.316365] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.316371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.316375] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.132 [2024-07-21 16:31:33.316386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:15.132 [2024-07-21 16:31:33.316413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.132 [2024-07-21 16:31:33.321283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.132 [2024-07-21 16:31:33.321301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.132 [2024-07-21 16:31:33.321306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.132 [2024-07-21 16:31:33.321321] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:15.132 [2024-07-21 16:31:33.321328] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:15.132 [2024-07-21 16:31:33.321334] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:15.132 [2024-07-21 16:31:33.321350] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321359] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.132 [2024-07-21 16:31:33.321368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.132 [2024-07-21 16:31:33.321394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.132 [2024-07-21 16:31:33.321455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.132 [2024-07-21 16:31:33.321462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.132 [2024-07-21 16:31:33.321465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.132 [2024-07-21 16:31:33.321474] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:15.132 [2024-07-21 16:31:33.321481] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:15.132 [2024-07-21 16:31:33.321489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.132 [2024-07-21 16:31:33.321503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.132 [2024-07-21 16:31:33.321521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.132 [2024-07-21 16:31:33.321571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.132 [2024-07-21 16:31:33.321577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.132 [2024-07-21 16:31:33.321580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.132 [2024-07-21 16:31:33.321589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:15.132 [2024-07-21 16:31:33.321597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:15.132 [2024-07-21 16:31:33.321604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.132 [2024-07-21 16:31:33.321619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.132 [2024-07-21 16:31:33.321636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.132 [2024-07-21 16:31:33.321685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.132 [2024-07-21 16:31:33.321691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.132 [2024-07-21 16:31:33.321694] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.132 [2024-07-21 16:31:33.321703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:15.132 [2024-07-21 16:31:33.321713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.132 [2024-07-21 16:31:33.321728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.132 [2024-07-21 16:31:33.321744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.132 [2024-07-21 16:31:33.321794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.132 [2024-07-21 16:31:33.321800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.132 [2024-07-21 16:31:33.321803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.132 [2024-07-21 16:31:33.321811] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:15.132 [2024-07-21 16:31:33.321816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:15.132 [2024-07-21 16:31:33.321823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:15.132 [2024-07-21 16:31:33.321928] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:15.132 [2024-07-21 16:31:33.321932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:15.132 [2024-07-21 16:31:33.321940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.321948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.132 [2024-07-21 16:31:33.321955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.132 [2024-07-21 16:31:33.321973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.132 [2024-07-21 16:31:33.322022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.132 [2024-07-21 16:31:33.322034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.132 [2024-07-21 16:31:33.322038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.322042] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.132 [2024-07-21 16:31:33.322047] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:15.132 [2024-07-21 16:31:33.322057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.322061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.322065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.132 [2024-07-21 16:31:33.322072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.132 [2024-07-21 16:31:33.322089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.132 [2024-07-21 16:31:33.322155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.132 [2024-07-21 16:31:33.322161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.132 [2024-07-21 16:31:33.322165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.132 [2024-07-21 16:31:33.322168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.132 [2024-07-21 16:31:33.322181] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:15.132 [2024-07-21 16:31:33.322187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.322196] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:15.133 [2024-07-21 16:31:33.322206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.322216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.133 [2024-07-21 16:31:33.322227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.133 [2024-07-21 16:31:33.322247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.133 [2024-07-21 16:31:33.322350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.133 [2024-07-21 16:31:33.322359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.133 [2024-07-21 16:31:33.322362] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322366] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x501a60): datao=0, datal=4096, cccid=0 00:16:15.133 [2024-07-21 16:31:33.322371] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x544840) on tqpair(0x501a60): expected_datao=0, payload_size=4096 00:16:15.133 [2024-07-21 16:31:33.322375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322382] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322386] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.133 [2024-07-21 
16:31:33.322394] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.133 [2024-07-21 16:31:33.322400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.133 [2024-07-21 16:31:33.322403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.133 [2024-07-21 16:31:33.322415] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:15.133 [2024-07-21 16:31:33.322420] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:15.133 [2024-07-21 16:31:33.322424] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:15.133 [2024-07-21 16:31:33.322429] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:15.133 [2024-07-21 16:31:33.322433] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:15.133 [2024-07-21 16:31:33.322438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.322462] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.322469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322477] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.133 [2024-07-21 16:31:33.322484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.133 [2024-07-21 16:31:33.322505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.133 [2024-07-21 16:31:33.322563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.133 [2024-07-21 16:31:33.322569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.133 [2024-07-21 16:31:33.322573] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.133 [2024-07-21 16:31:33.322584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x501a60) 00:16:15.133 [2024-07-21 16:31:33.322613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.133 [2024-07-21 16:31:33.322619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x501a60) 00:16:15.133 
[2024-07-21 16:31:33.322632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.133 [2024-07-21 16:31:33.322637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x501a60) 00:16:15.133 [2024-07-21 16:31:33.322649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.133 [2024-07-21 16:31:33.322655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x501a60) 00:16:15.133 [2024-07-21 16:31:33.322667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.133 [2024-07-21 16:31:33.322671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.322683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.322690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x501a60) 00:16:15.133 [2024-07-21 16:31:33.322700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.133 [2024-07-21 16:31:33.322719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544840, cid 0, qid 0 00:16:15.133 [2024-07-21 16:31:33.322725] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5449c0, cid 1, qid 0 00:16:15.133 [2024-07-21 16:31:33.322729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544b40, cid 2, qid 0 00:16:15.133 [2024-07-21 16:31:33.322733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544cc0, cid 3, qid 0 00:16:15.133 [2024-07-21 16:31:33.322738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544e40, cid 4, qid 0 00:16:15.133 [2024-07-21 16:31:33.322821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.133 [2024-07-21 16:31:33.322828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.133 [2024-07-21 16:31:33.322831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544e40) on tqpair=0x501a60 00:16:15.133 [2024-07-21 16:31:33.322839] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:15.133 [2024-07-21 16:31:33.322848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.322857] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.322863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.322870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322877] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x501a60) 00:16:15.133 [2024-07-21 16:31:33.322884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:15.133 [2024-07-21 16:31:33.322901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544e40, cid 4, qid 0 00:16:15.133 [2024-07-21 16:31:33.322958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.133 [2024-07-21 16:31:33.322964] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.133 [2024-07-21 16:31:33.322967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.322986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544e40) on tqpair=0x501a60 00:16:15.133 [2024-07-21 16:31:33.323045] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.323079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:15.133 [2024-07-21 16:31:33.323089] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.323093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x501a60) 00:16:15.133 [2024-07-21 16:31:33.323100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.133 [2024-07-21 16:31:33.323120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544e40, cid 4, qid 0 00:16:15.133 [2024-07-21 16:31:33.323185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.133 [2024-07-21 16:31:33.323196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.133 [2024-07-21 16:31:33.323200] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.133 [2024-07-21 16:31:33.323204] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x501a60): datao=0, datal=4096, cccid=4 00:16:15.134 [2024-07-21 16:31:33.323208] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x544e40) on tqpair(0x501a60): expected_datao=0, payload_size=4096 00:16:15.134 [2024-07-21 16:31:33.323213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323220] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323224] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.134 [2024-07-21 16:31:33.323238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:16:15.134 [2024-07-21 16:31:33.323241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544e40) on tqpair=0x501a60 00:16:15.134 [2024-07-21 16:31:33.323272] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:15.134 [2024-07-21 16:31:33.323286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.323316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.134 [2024-07-21 16:31:33.323337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544e40, cid 4, qid 0 00:16:15.134 [2024-07-21 16:31:33.323425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.134 [2024-07-21 16:31:33.323432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.134 [2024-07-21 16:31:33.323435] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323439] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x501a60): datao=0, datal=4096, cccid=4 00:16:15.134 [2024-07-21 16:31:33.323443] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x544e40) on tqpair(0x501a60): expected_datao=0, payload_size=4096 00:16:15.134 [2024-07-21 16:31:33.323448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323462] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323466] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.134 [2024-07-21 16:31:33.323479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.134 [2024-07-21 16:31:33.323483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544e40) on tqpair=0x501a60 00:16:15.134 [2024-07-21 16:31:33.323502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323513] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.323532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.134 [2024-07-21 16:31:33.323551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544e40, cid 4, qid 0 00:16:15.134 [2024-07-21 16:31:33.323616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.134 [2024-07-21 16:31:33.323622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.134 [2024-07-21 16:31:33.323625] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323629] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x501a60): datao=0, datal=4096, cccid=4 00:16:15.134 [2024-07-21 16:31:33.323634] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x544e40) on tqpair(0x501a60): expected_datao=0, payload_size=4096 00:16:15.134 [2024-07-21 16:31:33.323638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323645] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323648] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.134 [2024-07-21 16:31:33.323661] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.134 [2024-07-21 16:31:33.323665] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323669] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544e40) on tqpair=0x501a60 00:16:15.134 [2024-07-21 16:31:33.323678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323697] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323704] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323720] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:15.134 [2024-07-21 16:31:33.323725] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:15.134 [2024-07-21 16:31:33.323730] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:15.134 [2024-07-21 16:31:33.323759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.323770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.134 [2024-07-21 16:31:33.323776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.323789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:15.134 [2024-07-21 16:31:33.323812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544e40, cid 4, qid 0 00:16:15.134 [2024-07-21 16:31:33.323819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544fc0, cid 5, qid 0 00:16:15.134 [2024-07-21 16:31:33.323885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.134 [2024-07-21 16:31:33.323891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.134 [2024-07-21 16:31:33.323894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323898] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544e40) on tqpair=0x501a60 00:16:15.134 [2024-07-21 16:31:33.323905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.134 [2024-07-21 16:31:33.323910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.134 [2024-07-21 16:31:33.323913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544fc0) on tqpair=0x501a60 00:16:15.134 [2024-07-21 16:31:33.323926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.323930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.323937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.134 [2024-07-21 16:31:33.323954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544fc0, cid 5, qid 0 00:16:15.134 [2024-07-21 16:31:33.324006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.134 [2024-07-21 16:31:33.324017] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.134 [2024-07-21 16:31:33.324021] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.324025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544fc0) on tqpair=0x501a60 00:16:15.134 [2024-07-21 16:31:33.324035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.324040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.324046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.134 [2024-07-21 16:31:33.324080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544fc0, cid 5, qid 0 00:16:15.134 [2024-07-21 16:31:33.324138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.134 [2024-07-21 16:31:33.324144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:16:15.134 [2024-07-21 16:31:33.324148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.324152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544fc0) on tqpair=0x501a60 00:16:15.134 [2024-07-21 16:31:33.324162] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.324166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.324173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.134 [2024-07-21 16:31:33.324189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544fc0, cid 5, qid 0 00:16:15.134 [2024-07-21 16:31:33.324239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.134 [2024-07-21 16:31:33.324246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.134 [2024-07-21 16:31:33.324250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.324253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544fc0) on tqpair=0x501a60 00:16:15.134 [2024-07-21 16:31:33.324271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.324292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.324300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.134 [2024-07-21 16:31:33.324308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.324312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.324318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.134 [2024-07-21 16:31:33.324325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.324329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x501a60) 00:16:15.134 [2024-07-21 16:31:33.324335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.134 [2024-07-21 16:31:33.324346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.134 [2024-07-21 16:31:33.324350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x501a60) 00:16:15.135 [2024-07-21 16:31:33.324356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.135 [2024-07-21 16:31:33.324378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544fc0, cid 5, qid 0 00:16:15.135 [2024-07-21 16:31:33.324385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544e40, cid 4, qid 0 00:16:15.135 [2024-07-21 16:31:33.324389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x545140, cid 6, qid 0 00:16:15.135 [2024-07-21 
16:31:33.324394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5452c0, cid 7, qid 0 00:16:15.135 [2024-07-21 16:31:33.324528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.135 [2024-07-21 16:31:33.324534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.135 [2024-07-21 16:31:33.324538] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324542] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x501a60): datao=0, datal=8192, cccid=5 00:16:15.135 [2024-07-21 16:31:33.324546] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x544fc0) on tqpair(0x501a60): expected_datao=0, payload_size=8192 00:16:15.135 [2024-07-21 16:31:33.324551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324566] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324570] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.135 [2024-07-21 16:31:33.324582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.135 [2024-07-21 16:31:33.324586] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324589] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x501a60): datao=0, datal=512, cccid=4 00:16:15.135 [2024-07-21 16:31:33.324595] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x544e40) on tqpair(0x501a60): expected_datao=0, payload_size=512 00:16:15.135 [2024-07-21 16:31:33.324599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324605] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324609] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.135 [2024-07-21 16:31:33.324620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.135 [2024-07-21 16:31:33.324623] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324627] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x501a60): datao=0, datal=512, cccid=6 00:16:15.135 [2024-07-21 16:31:33.324632] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x545140) on tqpair(0x501a60): expected_datao=0, payload_size=512 00:16:15.135 [2024-07-21 16:31:33.324636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324642] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324645] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:15.135 [2024-07-21 16:31:33.324656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:15.135 [2024-07-21 16:31:33.324659] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324663] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x501a60): datao=0, datal=4096, cccid=7 00:16:15.135 [2024-07-21 16:31:33.324667] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5452c0) on tqpair(0x501a60): expected_datao=0, payload_size=4096 00:16:15.135 [2024-07-21 16:31:33.324671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324692] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324696] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.135 [2024-07-21 16:31:33.324708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.135 [2024-07-21 16:31:33.324711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544fc0) on tqpair=0x501a60 00:16:15.135 [2024-07-21 16:31:33.324732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.135 [2024-07-21 16:31:33.324738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.135 [2024-07-21 16:31:33.324741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544e40) on tqpair=0x501a60 00:16:15.135 [2024-07-21 16:31:33.324757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.135 [2024-07-21 16:31:33.324763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.135 [2024-07-21 16:31:33.324767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x545140) on tqpair=0x501a60 00:16:15.135 ===================================================== 00:16:15.135 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:15.135 ===================================================== 00:16:15.135 Controller Capabilities/Features 00:16:15.135 ================================ 00:16:15.135 Vendor ID: 8086 00:16:15.135 Subsystem Vendor ID: 8086 00:16:15.135 Serial Number: SPDK00000000000001 00:16:15.135 Model Number: SPDK bdev Controller 00:16:15.135 Firmware Version: 24.09 00:16:15.135 Recommended Arb Burst: 6 00:16:15.135 IEEE OUI Identifier: e4 d2 5c 00:16:15.135 Multi-path I/O 00:16:15.135 May have multiple subsystem ports: Yes 00:16:15.135 May have multiple controllers: Yes 00:16:15.135 Associated with SR-IOV VF: No 00:16:15.135 Max Data Transfer Size: 131072 00:16:15.135 Max Number of Namespaces: 32 00:16:15.135 Max Number of I/O Queues: 127 00:16:15.135 NVMe Specification Version (VS): 1.3 00:16:15.135 NVMe Specification Version (Identify): 1.3 00:16:15.135 Maximum Queue Entries: 128 00:16:15.135 Contiguous Queues Required: Yes 00:16:15.135 Arbitration Mechanisms Supported 00:16:15.135 Weighted Round Robin: Not Supported 00:16:15.135 Vendor Specific: Not Supported 00:16:15.135 Reset Timeout: 15000 ms 00:16:15.135 Doorbell Stride: 4 bytes 00:16:15.135 NVM Subsystem Reset: Not Supported 00:16:15.135 Command Sets Supported 00:16:15.135 NVM Command Set: Supported 00:16:15.135 Boot Partition: Not Supported 00:16:15.135 Memory Page Size Minimum: 4096 bytes 00:16:15.135 Memory Page Size Maximum: 4096 bytes 00:16:15.135 Persistent Memory Region: Not Supported 00:16:15.135 Optional Asynchronous Events Supported 00:16:15.135 Namespace Attribute Notices: Supported 00:16:15.135 
Firmware Activation Notices: Not Supported 00:16:15.135 ANA Change Notices: Not Supported 00:16:15.135 PLE Aggregate Log Change Notices: Not Supported 00:16:15.135 LBA Status Info Alert Notices: Not Supported 00:16:15.135 EGE Aggregate Log Change Notices: Not Supported 00:16:15.135 Normal NVM Subsystem Shutdown event: Not Supported 00:16:15.135 Zone Descriptor Change Notices: Not Supported 00:16:15.135 Discovery Log Change Notices: Not Supported 00:16:15.135 Controller Attributes 00:16:15.135 128-bit Host Identifier: Supported 00:16:15.135 Non-Operational Permissive Mode: Not Supported 00:16:15.135 NVM Sets: Not Supported 00:16:15.135 Read Recovery Levels: Not Supported 00:16:15.135 Endurance Groups: Not Supported 00:16:15.135 Predictable Latency Mode: Not Supported 00:16:15.135 Traffic Based Keep ALive: Not Supported 00:16:15.135 Namespace Granularity: Not Supported 00:16:15.135 SQ Associations: Not Supported 00:16:15.135 UUID List: Not Supported 00:16:15.135 Multi-Domain Subsystem: Not Supported 00:16:15.135 Fixed Capacity Management: Not Supported 00:16:15.135 Variable Capacity Management: Not Supported 00:16:15.135 Delete Endurance Group: Not Supported 00:16:15.135 Delete NVM Set: Not Supported 00:16:15.135 Extended LBA Formats Supported: Not Supported 00:16:15.135 Flexible Data Placement Supported: Not Supported 00:16:15.135 00:16:15.135 Controller Memory Buffer Support 00:16:15.135 ================================ 00:16:15.135 Supported: No 00:16:15.135 00:16:15.135 Persistent Memory Region Support 00:16:15.135 ================================ 00:16:15.135 Supported: No 00:16:15.135 00:16:15.135 Admin Command Set Attributes 00:16:15.135 ============================ 00:16:15.135 Security Send/Receive: Not Supported 00:16:15.135 Format NVM: Not Supported 00:16:15.135 Firmware Activate/Download: Not Supported 00:16:15.135 Namespace Management: Not Supported 00:16:15.135 Device Self-Test: Not Supported 00:16:15.135 Directives: Not Supported 00:16:15.135 NVMe-MI: Not Supported 00:16:15.135 Virtualization Management: Not Supported 00:16:15.135 Doorbell Buffer Config: Not Supported 00:16:15.135 Get LBA Status Capability: Not Supported 00:16:15.135 Command & Feature Lockdown Capability: Not Supported 00:16:15.135 Abort Command Limit: 4 00:16:15.135 Async Event Request Limit: 4 00:16:15.135 Number of Firmware Slots: N/A 00:16:15.135 Firmware Slot 1 Read-Only: N/A 00:16:15.135 Firmware Activation Without Reset: [2024-07-21 16:31:33.324777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.135 [2024-07-21 16:31:33.324783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.135 [2024-07-21 16:31:33.324786] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.135 [2024-07-21 16:31:33.324790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5452c0) on tqpair=0x501a60 00:16:15.135 N/A 00:16:15.135 Multiple Update Detection Support: N/A 00:16:15.135 Firmware Update Granularity: No Information Provided 00:16:15.135 Per-Namespace SMART Log: No 00:16:15.135 Asymmetric Namespace Access Log Page: Not Supported 00:16:15.135 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:15.135 Command Effects Log Page: Supported 00:16:15.135 Get Log Page Extended Data: Supported 00:16:15.135 Telemetry Log Pages: Not Supported 00:16:15.135 Persistent Event Log Pages: Not Supported 00:16:15.135 Supported Log Pages Log Page: May Support 00:16:15.135 Commands Supported & Effects Log Page: Not Supported 00:16:15.135 Feature Identifiers & 
Effects Log Page:May Support 00:16:15.135 NVMe-MI Commands & Effects Log Page: May Support 00:16:15.135 Data Area 4 for Telemetry Log: Not Supported 00:16:15.135 Error Log Page Entries Supported: 128 00:16:15.135 Keep Alive: Supported 00:16:15.135 Keep Alive Granularity: 10000 ms 00:16:15.135 00:16:15.135 NVM Command Set Attributes 00:16:15.135 ========================== 00:16:15.135 Submission Queue Entry Size 00:16:15.135 Max: 64 00:16:15.135 Min: 64 00:16:15.136 Completion Queue Entry Size 00:16:15.136 Max: 16 00:16:15.136 Min: 16 00:16:15.136 Number of Namespaces: 32 00:16:15.136 Compare Command: Supported 00:16:15.136 Write Uncorrectable Command: Not Supported 00:16:15.136 Dataset Management Command: Supported 00:16:15.136 Write Zeroes Command: Supported 00:16:15.136 Set Features Save Field: Not Supported 00:16:15.136 Reservations: Supported 00:16:15.136 Timestamp: Not Supported 00:16:15.136 Copy: Supported 00:16:15.136 Volatile Write Cache: Present 00:16:15.136 Atomic Write Unit (Normal): 1 00:16:15.136 Atomic Write Unit (PFail): 1 00:16:15.136 Atomic Compare & Write Unit: 1 00:16:15.136 Fused Compare & Write: Supported 00:16:15.136 Scatter-Gather List 00:16:15.136 SGL Command Set: Supported 00:16:15.136 SGL Keyed: Supported 00:16:15.136 SGL Bit Bucket Descriptor: Not Supported 00:16:15.136 SGL Metadata Pointer: Not Supported 00:16:15.136 Oversized SGL: Not Supported 00:16:15.136 SGL Metadata Address: Not Supported 00:16:15.136 SGL Offset: Supported 00:16:15.136 Transport SGL Data Block: Not Supported 00:16:15.136 Replay Protected Memory Block: Not Supported 00:16:15.136 00:16:15.136 Firmware Slot Information 00:16:15.136 ========================= 00:16:15.136 Active slot: 1 00:16:15.136 Slot 1 Firmware Revision: 24.09 00:16:15.136 00:16:15.136 00:16:15.136 Commands Supported and Effects 00:16:15.136 ============================== 00:16:15.136 Admin Commands 00:16:15.136 -------------- 00:16:15.136 Get Log Page (02h): Supported 00:16:15.136 Identify (06h): Supported 00:16:15.136 Abort (08h): Supported 00:16:15.136 Set Features (09h): Supported 00:16:15.136 Get Features (0Ah): Supported 00:16:15.136 Asynchronous Event Request (0Ch): Supported 00:16:15.136 Keep Alive (18h): Supported 00:16:15.136 I/O Commands 00:16:15.136 ------------ 00:16:15.136 Flush (00h): Supported LBA-Change 00:16:15.136 Write (01h): Supported LBA-Change 00:16:15.136 Read (02h): Supported 00:16:15.136 Compare (05h): Supported 00:16:15.136 Write Zeroes (08h): Supported LBA-Change 00:16:15.136 Dataset Management (09h): Supported LBA-Change 00:16:15.136 Copy (19h): Supported LBA-Change 00:16:15.136 00:16:15.136 Error Log 00:16:15.136 ========= 00:16:15.136 00:16:15.136 Arbitration 00:16:15.136 =========== 00:16:15.136 Arbitration Burst: 1 00:16:15.136 00:16:15.136 Power Management 00:16:15.136 ================ 00:16:15.136 Number of Power States: 1 00:16:15.136 Current Power State: Power State #0 00:16:15.136 Power State #0: 00:16:15.136 Max Power: 0.00 W 00:16:15.136 Non-Operational State: Operational 00:16:15.136 Entry Latency: Not Reported 00:16:15.136 Exit Latency: Not Reported 00:16:15.136 Relative Read Throughput: 0 00:16:15.136 Relative Read Latency: 0 00:16:15.136 Relative Write Throughput: 0 00:16:15.136 Relative Write Latency: 0 00:16:15.136 Idle Power: Not Reported 00:16:15.136 Active Power: Not Reported 00:16:15.136 Non-Operational Permissive Mode: Not Supported 00:16:15.136 00:16:15.136 Health Information 00:16:15.136 ================== 00:16:15.136 Critical Warnings: 00:16:15.136 Available Spare Space: 
OK 00:16:15.136 Temperature: OK 00:16:15.136 Device Reliability: OK 00:16:15.136 Read Only: No 00:16:15.136 Volatile Memory Backup: OK 00:16:15.136 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:15.136 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:15.136 Available Spare: 0% 00:16:15.136 Available Spare Threshold: 0% 00:16:15.136 Life Percentage Used:[2024-07-21 16:31:33.324913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.324920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x501a60) 00:16:15.136 [2024-07-21 16:31:33.324927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.136 [2024-07-21 16:31:33.324948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5452c0, cid 7, qid 0 00:16:15.136 [2024-07-21 16:31:33.325005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.136 [2024-07-21 16:31:33.325011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.136 [2024-07-21 16:31:33.325015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.325018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5452c0) on tqpair=0x501a60 00:16:15.136 [2024-07-21 16:31:33.325054] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:15.136 [2024-07-21 16:31:33.325064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544840) on tqpair=0x501a60 00:16:15.136 [2024-07-21 16:31:33.325070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.136 [2024-07-21 16:31:33.325075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5449c0) on tqpair=0x501a60 00:16:15.136 [2024-07-21 16:31:33.325079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.136 [2024-07-21 16:31:33.325083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544b40) on tqpair=0x501a60 00:16:15.136 [2024-07-21 16:31:33.325087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.136 [2024-07-21 16:31:33.325091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544cc0) on tqpair=0x501a60 00:16:15.136 [2024-07-21 16:31:33.325095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:15.136 [2024-07-21 16:31:33.325103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.325107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.325110] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x501a60) 00:16:15.136 [2024-07-21 16:31:33.325117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.136 [2024-07-21 16:31:33.325136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544cc0, cid 3, qid 0 00:16:15.136 [2024-07-21 16:31:33.325182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.136 [2024-07-21 16:31:33.325188] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.136 [2024-07-21 16:31:33.325191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.325195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544cc0) on tqpair=0x501a60 00:16:15.136 [2024-07-21 16:31:33.325201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.325205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.325209] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x501a60) 00:16:15.136 [2024-07-21 16:31:33.325215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.136 [2024-07-21 16:31:33.325234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544cc0, cid 3, qid 0 00:16:15.136 [2024-07-21 16:31:33.329276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.136 [2024-07-21 16:31:33.329293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.136 [2024-07-21 16:31:33.329297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.329301] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544cc0) on tqpair=0x501a60 00:16:15.136 [2024-07-21 16:31:33.329306] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:15.136 [2024-07-21 16:31:33.329310] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:15.136 [2024-07-21 16:31:33.329322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.329327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.329330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x501a60) 00:16:15.136 [2024-07-21 16:31:33.329338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.136 [2024-07-21 16:31:33.329361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x544cc0, cid 3, qid 0 00:16:15.136 [2024-07-21 16:31:33.329415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:15.136 [2024-07-21 16:31:33.329421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:15.136 [2024-07-21 16:31:33.329424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:15.136 [2024-07-21 16:31:33.329428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x544cc0) on tqpair=0x501a60 00:16:15.136 [2024-07-21 16:31:33.329436] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:16:15.394 0% 00:16:15.394 Data Units Read: 0 00:16:15.394 Data Units Written: 0 00:16:15.394 Host Read Commands: 0 00:16:15.394 Host Write Commands: 0 00:16:15.394 Controller Busy Time: 0 minutes 00:16:15.394 Power Cycles: 0 00:16:15.394 Power On Hours: 0 hours 00:16:15.394 Unsafe Shutdowns: 0 00:16:15.394 Unrecoverable Media Errors: 0 00:16:15.394 Lifetime Error Log Entries: 0 00:16:15.394 Warning Temperature Time: 0 minutes 00:16:15.394 Critical Temperature Time: 0 minutes 00:16:15.394 00:16:15.394 Number of Queues 00:16:15.394 
================ 00:16:15.394 Number of I/O Submission Queues: 127 00:16:15.394 Number of I/O Completion Queues: 127 00:16:15.394 00:16:15.394 Active Namespaces 00:16:15.394 ================= 00:16:15.394 Namespace ID:1 00:16:15.394 Error Recovery Timeout: Unlimited 00:16:15.394 Command Set Identifier: NVM (00h) 00:16:15.394 Deallocate: Supported 00:16:15.394 Deallocated/Unwritten Error: Not Supported 00:16:15.394 Deallocated Read Value: Unknown 00:16:15.394 Deallocate in Write Zeroes: Not Supported 00:16:15.394 Deallocated Guard Field: 0xFFFF 00:16:15.394 Flush: Supported 00:16:15.394 Reservation: Supported 00:16:15.394 Namespace Sharing Capabilities: Multiple Controllers 00:16:15.394 Size (in LBAs): 131072 (0GiB) 00:16:15.394 Capacity (in LBAs): 131072 (0GiB) 00:16:15.394 Utilization (in LBAs): 131072 (0GiB) 00:16:15.394 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:15.394 EUI64: ABCDEF0123456789 00:16:15.394 UUID: 371d8c15-f2a2-4174-aa37-db6d9dd1199b 00:16:15.394 Thin Provisioning: Not Supported 00:16:15.394 Per-NS Atomic Units: Yes 00:16:15.394 Atomic Boundary Size (Normal): 0 00:16:15.394 Atomic Boundary Size (PFail): 0 00:16:15.394 Atomic Boundary Offset: 0 00:16:15.394 Maximum Single Source Range Length: 65535 00:16:15.394 Maximum Copy Length: 65535 00:16:15.394 Maximum Source Range Count: 1 00:16:15.394 NGUID/EUI64 Never Reused: No 00:16:15.394 Namespace Write Protected: No 00:16:15.394 Number of LBA Formats: 1 00:16:15.394 Current LBA Format: LBA Format #00 00:16:15.394 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:15.394 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:15.394 rmmod nvme_tcp 00:16:15.394 rmmod nvme_fabrics 00:16:15.394 rmmod nvme_keyring 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86898 ']' 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86898 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86898 ']' 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86898 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@953 -- # uname 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86898 00:16:15.394 killing process with pid 86898 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86898' 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86898 00:16:15.394 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86898 00:16:15.652 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.652 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.652 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.652 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.652 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.652 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.652 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.652 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.910 16:31:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:15.910 00:16:15.910 real 0m2.712s 00:16:15.910 user 0m7.374s 00:16:15.910 sys 0m0.721s 00:16:15.910 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.910 ************************************ 00:16:15.910 END TEST nvmf_identify 00:16:15.910 ************************************ 00:16:15.910 16:31:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:15.910 16:31:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:15.910 16:31:33 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:15.910 16:31:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:15.910 16:31:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.910 16:31:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:15.910 ************************************ 00:16:15.910 START TEST nvmf_perf 00:16:15.910 ************************************ 00:16:15.910 16:31:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:15.910 * Looking for test storage... 
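The nvmf_perf test that starts here drives SPDK's spdk_nvme_perf benchmark, first against the local PCIe controller (0000:00:10.0) and then against the TCP target it is about to stand up, sweeping queue depth and I/O size from run to run. As a point of reference, a standalone invocation against the TCP listener, using the same binary path, flags, and transport string that appear in the runs recorded below, is roughly:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Queue depth (-q), I/O size (-o), workload mix (-w/-M), and duration (-t) are the knobs the harness varies between runs; the transport ID string (-r) selects which controller is exercised.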
00:16:15.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:15.910 16:31:34 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.910 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:15.910 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:15.911 Cannot find device "nvmf_tgt_br" 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.911 Cannot find device "nvmf_tgt_br2" 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:15.911 Cannot find device "nvmf_tgt_br" 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:15.911 Cannot find device "nvmf_tgt_br2" 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:15.911 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.169 
16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:16.169 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:16.170 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:16.170 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:16.170 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.170 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.170 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:16.170 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:16.170 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:16.170 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.170 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:16.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:16:16.428 00:16:16.428 --- 10.0.0.2 ping statistics --- 00:16:16.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.428 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:16.428 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:16.428 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:16:16.428 00:16:16.428 --- 10.0.0.3 ping statistics --- 00:16:16.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.428 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:16.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:16.428 00:16:16.428 --- 10.0.0.1 ping statistics --- 00:16:16.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.428 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=87124 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 87124 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 87124 ']' 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.428 16:31:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:16.428 [2024-07-21 16:31:34.517306] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:16.428 [2024-07-21 16:31:34.517402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.687 [2024-07-21 16:31:34.661731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.687 [2024-07-21 16:31:34.773289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.687 [2024-07-21 16:31:34.773924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
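The records above are nvmf_veth_init building the virtual topology for this run and nvmfappstart launching the target inside it: a network namespace (nvmf_tgt_ns_spdk) holds the target ends of the veth pairs, the host-side peers are enslaved to a bridge, firewall rules admit TCP port 4420, and connectivity is verified with ping before nvmf_tgt is started in the namespace. Condensed into plain shell, with the interface names and addresses exactly as used in this log (the second pair, nvmf_tgt_if2/nvmf_tgt_br2 on 10.0.0.3/24, is set up the same way and elided here), the bring-up is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                                        # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> initiator
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The target is then configured over JSON-RPC in the records that follow.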
00:16:16.687 [2024-07-21 16:31:34.774233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.687 [2024-07-21 16:31:34.774751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.687 [2024-07-21 16:31:34.775048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:16.687 [2024-07-21 16:31:34.775414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.687 [2024-07-21 16:31:34.775527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:16.687 [2024-07-21 16:31:34.776222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.687 [2024-07-21 16:31:34.776239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.618 16:31:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.618 16:31:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:16:17.618 16:31:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.618 16:31:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.618 16:31:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:17.618 16:31:35 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.618 16:31:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:17.618 16:31:35 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:17.874 16:31:35 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:17.875 16:31:35 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:18.131 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:18.131 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.388 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:18.388 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:18.388 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:18.388 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:18.388 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:18.645 [2024-07-21 16:31:36.753556] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.645 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.903 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:18.903 16:31:36 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:19.159 16:31:37 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:19.159 16:31:37 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:19.417 16:31:37 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.674 [2024-07-21 16:31:37.679148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.674 16:31:37 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:19.930 16:31:37 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:19.930 16:31:37 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:19.931 16:31:37 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:19.931 16:31:37 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:20.863 Initializing NVMe Controllers 00:16:20.863 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:20.863 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:20.863 Initialization complete. Launching workers. 00:16:20.863 ======================================================== 00:16:20.863 Latency(us) 00:16:20.863 Device Information : IOPS MiB/s Average min max 00:16:20.863 PCIE (0000:00:10.0) NSID 1 from core 0: 23613.95 92.24 1355.41 398.16 8849.54 00:16:20.863 ======================================================== 00:16:20.863 Total : 23613.95 92.24 1355.41 398.16 8849.54 00:16:20.863 00:16:20.863 16:31:39 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:22.235 Initializing NVMe Controllers 00:16:22.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:22.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:22.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:22.235 Initialization complete. Launching workers. 00:16:22.235 ======================================================== 00:16:22.235 Latency(us) 00:16:22.235 Device Information : IOPS MiB/s Average min max 00:16:22.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3019.87 11.80 330.87 116.15 5068.66 00:16:22.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.99 0.48 8128.11 6977.42 12043.40 00:16:22.235 ======================================================== 00:16:22.235 Total : 3143.86 12.28 638.40 116.15 12043.40 00:16:22.235 00:16:22.235 16:31:40 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:23.607 Initializing NVMe Controllers 00:16:23.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:23.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:23.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:23.607 Initialization complete. Launching workers. 
00:16:23.607 ======================================================== 00:16:23.607 Latency(us) 00:16:23.607 Device Information : IOPS MiB/s Average min max 00:16:23.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9214.25 35.99 3473.37 615.38 7800.69 00:16:23.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2687.32 10.50 12020.59 7310.06 20112.62 00:16:23.607 ======================================================== 00:16:23.607 Total : 11901.57 46.49 5403.29 615.38 20112.62 00:16:23.607 00:16:23.607 16:31:41 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:23.607 16:31:41 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:26.187 Initializing NVMe Controllers 00:16:26.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.187 Controller IO queue size 128, less than required. 00:16:26.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.187 Controller IO queue size 128, less than required. 00:16:26.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:26.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:26.187 Initialization complete. Launching workers. 00:16:26.187 ======================================================== 00:16:26.187 Latency(us) 00:16:26.187 Device Information : IOPS MiB/s Average min max 00:16:26.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1384.04 346.01 93677.93 66685.73 175881.79 00:16:26.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 494.94 123.74 267066.80 147775.35 377209.08 00:16:26.187 ======================================================== 00:16:26.187 Total : 1878.98 469.75 139350.20 66685.73 377209.08 00:16:26.187 00:16:26.187 16:31:44 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:16:26.455 Initializing NVMe Controllers 00:16:26.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:26.455 Controller IO queue size 128, less than required. 00:16:26.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.455 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:26.455 Controller IO queue size 128, less than required. 00:16:26.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.456 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:16:26.456 WARNING: Some requested NVMe devices were skipped 00:16:26.456 No valid NVMe controllers or AIO or URING devices found 00:16:26.456 16:31:44 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:16:28.988 Initializing NVMe Controllers 00:16:28.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:28.988 Controller IO queue size 128, less than required. 00:16:28.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:28.988 Controller IO queue size 128, less than required. 00:16:28.988 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:28.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:28.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:28.988 Initialization complete. Launching workers. 00:16:28.988 00:16:28.988 ==================== 00:16:28.988 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:28.988 TCP transport: 00:16:28.988 polls: 9881 00:16:28.988 idle_polls: 5500 00:16:28.988 sock_completions: 4381 00:16:28.988 nvme_completions: 2843 00:16:28.988 submitted_requests: 4284 00:16:28.988 queued_requests: 1 00:16:28.988 00:16:28.988 ==================== 00:16:28.988 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:28.988 TCP transport: 00:16:28.988 polls: 12381 00:16:28.988 idle_polls: 8741 00:16:28.988 sock_completions: 3640 00:16:28.988 nvme_completions: 6989 00:16:28.988 submitted_requests: 10382 00:16:28.988 queued_requests: 1 00:16:28.988 ======================================================== 00:16:28.988 Latency(us) 00:16:28.988 Device Information : IOPS MiB/s Average min max 00:16:28.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 710.46 177.61 187747.44 114357.04 336946.31 00:16:28.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1746.89 436.72 73010.08 40929.11 122376.76 00:16:28.988 ======================================================== 00:16:28.988 Total : 2457.35 614.34 106182.37 40929.11 336946.31 00:16:28.988 00:16:28.988 16:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:28.988 16:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.247 rmmod nvme_tcp 00:16:29.247 rmmod nvme_fabrics 00:16:29.247 rmmod nvme_keyring 00:16:29.247 16:31:47 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 87124 ']' 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 87124 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 87124 ']' 00:16:29.247 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 87124 00:16:29.506 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:16:29.506 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:29.506 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87124 00:16:29.506 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:29.506 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:29.506 killing process with pid 87124 00:16:29.506 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87124' 00:16:29.506 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 87124 00:16:29.506 16:31:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 87124 00:16:30.073 16:31:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:30.073 16:31:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:30.073 16:31:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:30.073 16:31:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.073 16:31:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:30.073 16:31:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.073 16:31:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.073 16:31:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.332 16:31:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:30.332 00:16:30.332 real 0m14.369s 00:16:30.332 user 0m52.374s 00:16:30.332 sys 0m3.688s 00:16:30.332 16:31:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:30.332 ************************************ 00:16:30.332 END TEST nvmf_perf 00:16:30.332 ************************************ 00:16:30.332 16:31:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:30.332 16:31:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:30.332 16:31:48 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:30.332 16:31:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:30.332 16:31:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.333 16:31:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:30.333 ************************************ 00:16:30.333 START TEST nvmf_fio_host 00:16:30.333 ************************************ 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:30.333 * Looking for test storage... 
00:16:30.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
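The trace above is fio.sh sourcing test/nvmf/common.sh and starting nvmftestinit: per-run port numbers, a freshly generated host NQN, and the choice of network backend before the veth setup that follows. A condensed, approximate sketch of what that amounts to for this run (variable names and values taken from the trace; the host-ID derivation and the branching are simplified guesses, not the literal common.sh source):

    # Sketch only -- condensed from the trace above, not the real common.sh.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-... in this run
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # the uuid part of the host NQN (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

    # nvmftestinit: with NET_TYPE=virt and a tcp transport there is no physical NIC
    # to claim, so the test builds its own veth topology instead (nvmf_veth_init).
    if [[ $NET_TYPE == virt ]]; then
        nvmf_veth_init
    fi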
00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:30.333 Cannot find device "nvmf_tgt_br" 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.333 Cannot find device "nvmf_tgt_br2" 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:30.333 Cannot find device "nvmf_tgt_br" 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:30.333 Cannot find device "nvmf_tgt_br2" 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:16:30.333 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:30.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:16:30.593 00:16:30.593 --- 10.0.0.2 ping statistics --- 00:16:30.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.593 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:30.593 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.593 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:16:30.593 00:16:30.593 --- 10.0.0.3 ping statistics --- 00:16:30.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.593 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:30.593 00:16:30.593 --- 10.0.0.1 ping statistics --- 00:16:30.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.593 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.593 16:31:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87604 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87604 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87604 ']' 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.852 16:31:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.852 [2024-07-21 16:31:48.873357] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:30.852 [2024-07-21 16:31:48.873442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.852 [2024-07-21 16:31:49.011854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.110 [2024-07-21 16:31:49.106667] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:31.110 [2024-07-21 16:31:49.106961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.110 [2024-07-21 16:31:49.107035] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.111 [2024-07-21 16:31:49.107144] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.111 [2024-07-21 16:31:49.107204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.111 [2024-07-21 16:31:49.107418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.111 [2024-07-21 16:31:49.107479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.111 [2024-07-21 16:31:49.108352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.111 [2024-07-21 16:31:49.108359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.043 16:31:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.043 16:31:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:16:32.043 16:31:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:32.043 [2024-07-21 16:31:50.071884] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.043 16:31:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:32.043 16:31:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:32.043 16:31:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.043 16:31:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:32.314 Malloc1 00:16:32.314 16:31:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:32.572 16:31:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:32.829 16:31:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:33.086 [2024-07-21 16:31:51.172056] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.086 16:31:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:33.344 16:31:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:33.600 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:33.600 fio-3.35 00:16:33.600 Starting 1 thread 00:16:36.128 00:16:36.128 test: (groupid=0, jobs=1): err= 0: pid=87734: Sun Jul 21 16:31:53 2024 00:16:36.128 read: IOPS=8874, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2007msec) 00:16:36.128 slat (nsec): min=1714, max=413563, avg=2652.80, stdev=3883.78 00:16:36.128 clat (usec): min=3388, max=13754, avg=7540.23, stdev=711.75 00:16:36.128 lat (usec): min=3430, max=13757, avg=7542.89, stdev=711.76 00:16:36.128 clat percentiles (usec): 00:16:36.128 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:16:36.128 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:16:36.128 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8717], 00:16:36.128 | 99.00th=[10159], 99.50th=[10814], 99.90th=[12518], 99.95th=[12780], 00:16:36.128 | 99.99th=[13698] 00:16:36.128 bw ( KiB/s): min=34496, max=36088, per=99.94%, avg=35478.00, stdev=689.75, samples=4 00:16:36.128 iops : min= 8624, max= 9022, avg=8869.50, stdev=172.44, samples=4 00:16:36.128 write: IOPS=8886, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2007msec); 0 zone resets 00:16:36.128 slat (nsec): min=1830, max=274755, avg=2791.42, stdev=2558.03 00:16:36.128 clat (usec): min=2608, max=12970, avg=6814.66, stdev=639.06 
00:16:36.128 lat (usec): min=2622, max=12973, avg=6817.45, stdev=639.10 00:16:36.128 clat percentiles (usec): 00:16:36.128 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:16:36.128 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6783], 60.00th=[ 6849], 00:16:36.128 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7504], 95.00th=[ 7767], 00:16:36.128 | 99.00th=[ 8979], 99.50th=[ 9765], 99.90th=[10945], 99.95th=[11863], 00:16:36.128 | 99.99th=[12911] 00:16:36.128 bw ( KiB/s): min=33776, max=36296, per=100.00%, avg=35564.00, stdev=1198.66, samples=4 00:16:36.128 iops : min= 8444, max= 9074, avg=8891.00, stdev=299.66, samples=4 00:16:36.128 lat (msec) : 4=0.06%, 10=99.16%, 20=0.77% 00:16:36.128 cpu : usr=60.97%, sys=28.41%, ctx=7, majf=0, minf=7 00:16:36.128 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:36.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:36.128 issued rwts: total=17812,17836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.128 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:36.128 00:16:36.128 Run status group 0 (all jobs): 00:16:36.128 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2007-2007msec 00:16:36.128 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2007-2007msec 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:36.128 16:31:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:36.128 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:36.128 fio-3.35 00:16:36.128 Starting 1 thread 00:16:38.656 00:16:38.656 test: (groupid=0, jobs=1): err= 0: pid=87778: Sun Jul 21 16:31:56 2024 00:16:38.656 read: IOPS=8483, BW=133MiB/s (139MB/s)(266MiB/2007msec) 00:16:38.656 slat (usec): min=2, max=111, avg= 3.49, stdev= 2.18 00:16:38.656 clat (usec): min=2479, max=17538, avg=8926.18, stdev=2118.06 00:16:38.656 lat (usec): min=2483, max=17542, avg=8929.66, stdev=2118.09 00:16:38.656 clat percentiles (usec): 00:16:38.656 | 1.00th=[ 4883], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6980], 00:16:38.656 | 30.00th=[ 7570], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9503], 00:16:38.656 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11338], 95.00th=[12387], 00:16:38.656 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15664], 99.95th=[16057], 00:16:38.656 | 99.99th=[16450] 00:16:38.656 bw ( KiB/s): min=56576, max=79520, per=51.19%, avg=69480.00, stdev=10410.95, samples=4 00:16:38.656 iops : min= 3536, max= 4970, avg=4342.50, stdev=650.68, samples=4 00:16:38.656 write: IOPS=5075, BW=79.3MiB/s (83.2MB/s)(142MiB/1789msec); 0 zone resets 00:16:38.656 slat (usec): min=30, max=362, avg=34.45, stdev= 9.25 00:16:38.656 clat (usec): min=3475, max=17071, avg=10794.46, stdev=1770.92 00:16:38.656 lat (usec): min=3511, max=17102, avg=10828.91, stdev=1770.88 00:16:38.656 clat percentiles (usec): 00:16:38.656 | 1.00th=[ 7177], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9241], 00:16:38.656 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10683], 60.00th=[11076], 00:16:38.656 | 70.00th=[11600], 80.00th=[12125], 90.00th=[13173], 95.00th=[14091], 00:16:38.656 | 99.00th=[15401], 99.50th=[15926], 99.90th=[16581], 99.95th=[16909], 00:16:38.656 | 99.99th=[17171] 00:16:38.656 bw ( KiB/s): min=60000, max=81952, per=88.93%, avg=72216.00, stdev=10019.57, samples=4 00:16:38.656 iops : min= 3750, max= 5122, avg=4513.50, stdev=626.22, samples=4 00:16:38.656 lat (msec) : 4=0.20%, 10=55.74%, 20=44.06% 00:16:38.656 cpu : usr=66.75%, sys=21.34%, ctx=31, majf=0, minf=24 00:16:38.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:38.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:38.656 issued rwts: total=17027,9080,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:38.656 00:16:38.656 Run status group 0 (all jobs): 00:16:38.656 READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2007-2007msec 00:16:38.656 WRITE: bw=79.3MiB/s (83.2MB/s), 
79.3MiB/s-79.3MiB/s (83.2MB/s-83.2MB/s), io=142MiB (149MB), run=1789-1789msec 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:38.656 rmmod nvme_tcp 00:16:38.656 rmmod nvme_fabrics 00:16:38.656 rmmod nvme_keyring 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87604 ']' 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87604 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87604 ']' 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87604 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87604 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87604' 00:16:38.656 killing process with pid 87604 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87604 00:16:38.656 16:31:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87604 00:16:38.914 16:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:38.914 16:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:38.914 16:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:38.914 16:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.914 16:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:38.914 16:31:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.914 16:31:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.914 16:31:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.172 16:31:57 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:39.172 ************************************ 00:16:39.172 END TEST nvmf_fio_host 00:16:39.172 ************************************ 00:16:39.172 00:16:39.172 real 0m8.802s 00:16:39.172 user 0m35.598s 00:16:39.172 sys 0m2.582s 00:16:39.172 16:31:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:39.172 16:31:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.172 16:31:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:39.172 16:31:57 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:39.172 16:31:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:39.172 16:31:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.172 16:31:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.172 ************************************ 00:16:39.172 START TEST nvmf_failover 00:16:39.172 ************************************ 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:39.172 * Looking for test storage... 00:16:39.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:39.172 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:39.173 Cannot find device "nvmf_tgt_br" 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:39.173 Cannot find device "nvmf_tgt_br2" 00:16:39.173 16:31:57 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:39.173 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:39.433 Cannot find device "nvmf_tgt_br" 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:39.433 Cannot find device "nvmf_tgt_br2" 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:39.433 16:31:57 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:39.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:16:39.433 00:16:39.433 --- 10.0.0.2 ping statistics --- 00:16:39.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.433 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:39.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:39.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:16:39.433 00:16:39.433 --- 10.0.0.3 ping statistics --- 00:16:39.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.433 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:39.433 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:39.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:39.692 00:16:39.692 --- 10.0.0.1 ping statistics --- 00:16:39.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.692 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=88000 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 88000 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88000 ']' 
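nvmf_veth_init, traced above for the failover test (and earlier for the fio host test), always rebuilds the same picture: the target runs inside the nvmf_tgt_ns_spdk network namespace with two addresses, the initiator keeps 10.0.0.1 in the root namespace, and the root-namespace ends of the veth pairs hang off one bridge. A condensed sketch of the commands, with the interface names and addresses from this log (ordering simplified, error handling omitted):

    # Target namespace plus three veth pairs; the *_if ends go into the namespace or carry the IPs.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator side in the root namespace, two target IPs inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring the links up and bridge the root-namespace ends together.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

    # Let NVMe/TCP traffic in, allow forwarding across the bridge, then sanity-check with ping.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

    # The kernel initiator module and the target app then start on top of this topology.
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &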
00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.692 16:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:39.692 [2024-07-21 16:31:57.716893] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:39.692 [2024-07-21 16:31:57.716963] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.692 [2024-07-21 16:31:57.848940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:39.951 [2024-07-21 16:31:57.941986] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.951 [2024-07-21 16:31:57.942052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.951 [2024-07-21 16:31:57.942061] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.951 [2024-07-21 16:31:57.942068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.951 [2024-07-21 16:31:57.942075] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
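Once the target is up and listening on its RPC socket, the rest of the failover test is driven through scripts/rpc.py: one malloc-backed subsystem exported on three TCP listeners, a bdevperf initiator attached through two of those listeners, and listeners removed one at a time to force the active path to fail over. A condensed sketch of that sequence as it appears in the remainder of this test (paths, NQNs and ports copied from the log; relative paths assume the repo root):

    rpc=scripts/rpc.py

    # Target side: transport, a 64 MiB / 512-byte-block malloc bdev, one subsystem, three listeners.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

    # Initiator side: bdevperf with its own RPC socket, one controller (NVMe0) with two paths.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # Failover trigger: remove the listener the active path is using; I/O should continue on 4421.
    # The repeated tcp.c qpair-state messages further down are that 4420 path being torn down.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420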
00:16:39.951 [2024-07-21 16:31:57.942341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.951 [2024-07-21 16:31:57.942858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.951 [2024-07-21 16:31:57.943024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.517 16:31:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.517 16:31:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:40.517 16:31:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.517 16:31:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:40.517 16:31:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:40.517 16:31:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.517 16:31:58 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:40.774 [2024-07-21 16:31:58.975400] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.031 16:31:58 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:41.289 Malloc0 00:16:41.289 16:31:59 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:41.289 16:31:59 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:41.547 16:31:59 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:41.805 [2024-07-21 16:31:59.876596] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.805 16:31:59 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:42.062 [2024-07-21 16:32:00.084830] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:42.062 16:32:00 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:42.320 [2024-07-21 16:32:00.289067] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88107 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88107 /var/tmp/bdevperf.sock 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88107 ']' 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.320 16:32:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:43.255 16:32:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.255 16:32:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:43.255 16:32:01 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:43.513 NVMe0n1 00:16:43.513 16:32:01 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:43.770 00:16:43.770 16:32:01 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:43.770 16:32:01 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88156 00:16:43.770 16:32:01 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:45.141 16:32:02 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.141 [2024-07-21 16:32:03.134559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 
00:16:45.141 [2024-07-21 16:32:03.134729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 [2024-07-21 16:32:03.134745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x629f90 is same with the state(5) to be set 00:16:45.141 16:32:03 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:48.435 16:32:06 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:48.435 00:16:48.435 16:32:06 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:48.693 [2024-07-21 16:32:06.705963] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706170] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 [2024-07-21 16:32:06.706383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62b4e0 is same with the state(5) to be set 00:16:48.694 16:32:06 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:51.971 16:32:09 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.971 [2024-07-21 16:32:09.968907] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.971 16:32:09 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:52.911 16:32:10 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:53.181 [2024-07-21 16:32:11.248737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 
is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.248994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.249001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.249008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.249017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.181 [2024-07-21 16:32:11.249024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249165] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 
00:16:53.182 [2024-07-21 16:32:11.249339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 [2024-07-21 16:32:11.249346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62bbc0 is same with the state(5) to be set 00:16:53.182 16:32:11 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 88156 00:16:59.750 0 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 88107 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88107 ']' 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88107 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88107 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:59.750 killing process with pid 88107 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88107' 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88107 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88107 00:16:59.750 16:32:17 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:59.750 [2024-07-21 16:32:00.349958] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:16:59.750 [2024-07-21 16:32:00.350061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88107 ] 00:16:59.750 [2024-07-21 16:32:00.486345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.750 [2024-07-21 16:32:00.614109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.750 Running I/O for 15 seconds... 
00:16:59.750 [2024-07-21 16:32:03.135532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135918] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.135985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.135999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.750 [2024-07-21 16:32:03.136062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.750 [2024-07-21 16:32:03.136086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.750 [2024-07-21 16:32:03.136110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.750 [2024-07-21 16:32:03.136135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.750 [2024-07-21 16:32:03.136159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.750 [2024-07-21 16:32:03.136184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136197] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.750 [2024-07-21 16:32:03.136209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.750 [2024-07-21 16:32:03.136232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94472 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.750 [2024-07-21 16:32:03.136597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.750 [2024-07-21 16:32:03.136610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.751 [2024-07-21 16:32:03.136654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.751 [2024-07-21 16:32:03.136735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.751 [2024-07-21 16:32:03.136775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.751 [2024-07-21 16:32:03.136810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.136835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.136861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.136904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:59.751 [2024-07-21 16:32:03.136929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.136961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.136974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.136986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137225] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137492] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.751 [2024-07-21 16:32:03.137853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.751 [2024-07-21 16:32:03.137865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.137887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.137901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.137915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.137926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.137939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.752 [2024-07-21 16:32:03.137951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.137964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.752 [2024-07-21 16:32:03.137976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.137990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.752 [2024-07-21 16:32:03.138002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.752 [2024-07-21 16:32:03.138028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 
[2024-07-21 16:32:03.138041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.752 [2024-07-21 16:32:03.138053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.752 [2024-07-21 16:32:03.138078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.752 [2024-07-21 16:32:03.138103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.752 [2024-07-21 16:32:03.138128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95280 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.138985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.138998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.139010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.139023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.752 [2024-07-21 16:32:03.139035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.139048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.752 [2024-07-21 16:32:03.139060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.752 [2024-07-21 16:32:03.139094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94600 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94608 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94616 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94624 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94632 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94640 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94648 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94656 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94664 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 
16:32:03.139536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94672 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.753 [2024-07-21 16:32:03.139587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.753 [2024-07-21 16:32:03.139596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94680 len:8 PRP1 0x0 PRP2 0x0 00:16:59.753 [2024-07-21 16:32:03.139608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139692] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1174c90 was disconnected and freed. reset controller. 00:16:59.753 [2024-07-21 16:32:03.139733] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:59.753 [2024-07-21 16:32:03.139788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.753 [2024-07-21 16:32:03.139807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.753 [2024-07-21 16:32:03.139842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.753 [2024-07-21 16:32:03.139867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.753 [2024-07-21 16:32:03.139890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:03.139902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:59.753 [2024-07-21 16:32:03.139947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8e30 (9): Bad file descriptor 00:16:59.753 [2024-07-21 16:32:03.143218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:59.753 [2024-07-21 16:32:03.178694] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
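The entries above cover one complete failover cycle for nqn.2016-06.io.spdk:cnode1: the queued I/O on the disconnected queue pair is completed with "ABORTED - SQ DELETION (00/08)" status, bdev_nvme starts failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the subsequent controller reset reports "Resetting controller successful". As a minimal sketch for tallying those events from a saved copy of this console output (a hypothetical helper, not part of the SPDK test suite; the log path is a placeholder):
  LOG=console.log   # placeholder: path to a saved copy of this console output
  # Count aborted completions by occurrence (not by line), since a console line may hold several entries.
  grep -o 'ABORTED - SQ DELETION' "$LOG" | wc -l
  # List the failover transitions between TCP listeners (e.g. 4420 -> 4421 -> 4422).
  grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$LOG" | sort | uniq -c
  # Count controller resets that completed after a failover.
  grep -o 'Resetting controller successful' "$LOG" | wc -l
The same abort/failover/reset cycle repeats below against the next listener port.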
00:16:59.753 [2024-07-21 16:32:06.706698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.753 [2024-07-21 16:32:06.706764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.706780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.753 [2024-07-21 16:32:06.706792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.706813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.753 [2024-07-21 16:32:06.706834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.706849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.753 [2024-07-21 16:32:06.706860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.706872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f8e30 is same with the state(5) to be set 00:16:59.753 [2024-07-21 16:32:06.706959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.706979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.706999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.753 [2024-07-21 16:32:06.707428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.753 [2024-07-21 16:32:06.707442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.707983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.707997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.754 [2024-07-21 16:32:06.708008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 
[2024-07-21 16:32:06.708167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.754 [2024-07-21 16:32:06.708510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.754 [2024-07-21 16:32:06.708523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:92 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.708978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.708991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16312 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 
16:32:06.709384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.755 [2024-07-21 16:32:06.709745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.755 [2024-07-21 16:32:06.709758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.756 [2024-07-21 16:32:06.709769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.709782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.756 [2024-07-21 16:32:06.709799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.709813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.756 [2024-07-21 16:32:06.709825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.709838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.756 [2024-07-21 16:32:06.709850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.709863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.756 [2024-07-21 16:32:06.709875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.709888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.756 [2024-07-21 16:32:06.709900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.709913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.709926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.709939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.709951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.709964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.709976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.709990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 
[2024-07-21 16:32:06.710525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:06.710844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710869] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.756 [2024-07-21 16:32:06.710890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.756 [2024-07-21 16:32:06.710899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16024 len:8 PRP1 0x0 PRP2 0x0 00:16:59.756 [2024-07-21 16:32:06.710910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:06.710987] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1176dc0 was disconnected and freed. reset controller. 00:16:59.756 [2024-07-21 16:32:06.711003] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:59.756 [2024-07-21 16:32:06.711016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:59.756 [2024-07-21 16:32:06.714684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:59.756 [2024-07-21 16:32:06.714834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8e30 (9): Bad file descriptor 00:16:59.756 [2024-07-21 16:32:06.751755] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:59.756 [2024-07-21 16:32:11.250736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:11.250793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:11.250821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.756 [2024-07-21 16:32:11.250835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.756 [2024-07-21 16:32:11.250850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.250862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.250875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.250887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.250900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.250911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.250924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.250936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.250949] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.250960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.250973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.250985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.250997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:109 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:59.757 [2024-07-21 16:32:11.251763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17496 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.757 [2024-07-21 16:32:11.251787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.757 [2024-07-21 16:32:11.251819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.757 [2024-07-21 16:32:11.251843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.757 [2024-07-21 16:32:11.251867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.757 [2024-07-21 16:32:11.251891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.757 [2024-07-21 16:32:11.251915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.757 [2024-07-21 16:32:11.251939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.757 [2024-07-21 16:32:11.251964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.251976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.757 [2024-07-21 16:32:11.251987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.757 [2024-07-21 16:32:11.252000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 
[2024-07-21 16:32:11.252037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.252976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.252994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.253009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.253021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.253049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.253062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.758 [2024-07-21 16:32:11.253075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.758 [2024-07-21 16:32:11.253094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253221] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253525] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18064 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.253970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.253983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.254001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.254027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.254052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:59.759 [2024-07-21 16:32:11.254077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.254102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.254128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.254153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.254179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:59.759 [2024-07-21 16:32:11.254204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.759 [2024-07-21 16:32:11.254287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18192 len:8 PRP1 0x0 PRP2 0x0 00:16:59.759 [2024-07-21 16:32:11.254302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.759 [2024-07-21 16:32:11.254319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.759 [2024-07-21 16:32:11.254329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.759 [2024-07-21 16:32:11.254338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18200 len:8 PRP1 0x0 PRP2 0x0 00:16:59.760 [2024-07-21 16:32:11.254359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.760 [2024-07-21 16:32:11.254373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.760 [2024-07-21 16:32:11.254382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.760 [2024-07-21 16:32:11.254400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:8 PRP1 0x0 PRP2 0x0 00:16:59.760 [2024-07-21 16:32:11.254412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:59.760 [2024-07-21 16:32:11.254424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.760 [2024-07-21 16:32:11.254433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.760 [2024-07-21 16:32:11.254443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18216 len:8 PRP1 0x0 PRP2 0x0 00:16:59.760 [2024-07-21 16:32:11.254455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.760 [2024-07-21 16:32:11.254473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:59.760 [2024-07-21 16:32:11.254482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:59.760 [2024-07-21 16:32:11.254493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18224 len:8 PRP1 0x0 PRP2 0x0 00:16:59.760 [2024-07-21 16:32:11.254506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.760 [2024-07-21 16:32:11.254590] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1176a80 was disconnected and freed. reset controller. 00:16:59.760 [2024-07-21 16:32:11.254608] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:59.760 [2024-07-21 16:32:11.254674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.760 [2024-07-21 16:32:11.254693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.760 [2024-07-21 16:32:11.254707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.760 [2024-07-21 16:32:11.254719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.760 [2024-07-21 16:32:11.254732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.760 [2024-07-21 16:32:11.254743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.760 [2024-07-21 16:32:11.254756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.760 [2024-07-21 16:32:11.254768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:59.760 [2024-07-21 16:32:11.254780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:59.760 [2024-07-21 16:32:11.254812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8e30 (9): Bad file descriptor 00:16:59.760 [2024-07-21 16:32:11.258348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:59.760 [2024-07-21 16:32:11.293591] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
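The long runs of *NOTICE* lines above are the initiator-side view of a forced path switch: every READ/WRITE that bdevperf had in flight on the old queue pair is completed with ABORTED - SQ DELETION (00/08) when that qpair is disconnected, bdev_nvme starts the failover to the next trid, and each burst ends with a successful controller reset. Rather than reading the abort lines one by one, a capture like this can be summarized by counting those markers. A minimal sketch, assuming the output was saved to the try.txt file this run uses (any saved copy of the log works the same way):

  #!/usr/bin/env bash
  # Summarize a captured failover log instead of reading every abort line.
  # Default path is the try.txt file this test writes; pass another capture as $1.
  LOG=${1:-/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt}

  aborts=$(grep -c 'ABORTED - SQ DELETION' "$LOG")           # I/O completed when the old qpair was torn down
  failovers=$(grep -c 'Start failover from' "$LOG")          # path switches initiated by bdev_nvme
  resets=$(grep -c 'Resetting controller successful' "$LOG") # switches that completed

  echo "aborted completions: $aborts"
  echo "failovers started:   $failovers"
  echo "successful resets:   $resets"

For a healthy run the number of "Start failover from" and "Resetting controller successful" lines should match, one pair per forced path switch.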
00:16:59.760 00:16:59.760 Latency(us) 00:16:59.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.760 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:59.760 Verification LBA range: start 0x0 length 0x4000 00:16:59.760 NVMe0n1 : 15.00 10175.87 39.75 253.06 0.00 12247.36 506.41 19184.17 00:16:59.760 =================================================================================================================== 00:16:59.760 Total : 10175.87 39.75 253.06 0.00 12247.36 506.41 19184.17 00:16:59.760 Received shutdown signal, test time was about 15.000000 seconds 00:16:59.760 00:16:59.760 Latency(us) 00:16:59.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.760 =================================================================================================================== 00:16:59.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88359 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88359 /var/tmp/bdevperf.sock 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88359 ']' 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
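The table above closes the first, 15-second bdevperf run: 10175.87 IOPS of 4 KiB verify I/O (10175.87 x 4096 / 2^20 is the 39.75 MiB/s shown), about 253 failed I/O per second while paths were being dropped, and an average latency of roughly 12.2 ms. failover.sh then gates the run on the count of "Resetting controller successful" notices, requiring exactly three, one per forced path switch (4420 to 4421, 4421 to 4422, 4422 to 4420), before restarting bdevperf in RPC-server mode (-z -r /var/tmp/bdevperf.sock) for the second phase. A minimal sketch of that gate, assuming the run's output was captured to the try.txt file this test uses:

  #!/usr/bin/env bash
  # Three successful resets are expected, one per forced path switch.
  LOG=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # capture file used by this run

  count=$(grep -c 'Resetting controller successful' "$LOG")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, saw $count" >&2
      exit 1
  fi
  echo "failover count OK: $count"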
00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.760 16:32:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:00.350 16:32:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.351 16:32:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:00.351 16:32:18 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:00.608 [2024-07-21 16:32:18.591598] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:00.608 16:32:18 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:00.608 [2024-07-21 16:32:18.815805] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:00.865 16:32:18 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:01.123 NVMe0n1 00:17:01.123 16:32:19 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:01.381 00:17:01.381 16:32:19 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:01.640 00:17:01.640 16:32:19 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:01.640 16:32:19 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:01.900 16:32:20 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:02.159 16:32:20 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:05.464 16:32:23 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:05.464 16:32:23 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:05.464 16:32:23 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:05.464 16:32:23 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88496 00:17:05.464 16:32:23 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88496 00:17:06.837 0 00:17:06.837 16:32:24 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:06.837 [2024-07-21 16:32:17.444425] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
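The trace above sets up the second phase: the new bdevperf instance (pid 88359) is driven over /var/tmp/bdevperf.sock, the target gets additional listeners on ports 4421 and 4422, NVMe0 is attached once per portal so bdev_nvme holds three paths, the active 4420 path is detached to force a failover, and bdevperf.py perform_tests then runs one second of verify I/O whose captured output (try.txt) is dumped below. Condensed into plain rpc.py calls, with addresses, ports and the NQN taken from the log and the checks the real failover.sh wraps around each step omitted:

  #!/usr/bin/env bash
  set -e
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target side: expose two more TCP portals for the same subsystem.
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

  # Initiator side: attach the controller through each portal (three paths).
  for port in 4420 4421 4422; do
      $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
          -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
  done

  # Drop the active path and give bdev_nvme time to fail over to another one.
  $rpc -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0
  $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n "$nqn"
  sleep 3
  $rpc -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0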
00:17:06.837 [2024-07-21 16:32:17.444549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88359 ] 00:17:06.837 [2024-07-21 16:32:17.581830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.837 [2024-07-21 16:32:17.679833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.837 [2024-07-21 16:32:20.238431] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:06.837 [2024-07-21 16:32:20.238540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.837 [2024-07-21 16:32:20.238566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.837 [2024-07-21 16:32:20.238584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.837 [2024-07-21 16:32:20.238612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.837 [2024-07-21 16:32:20.238650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.837 [2024-07-21 16:32:20.238662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.837 [2024-07-21 16:32:20.238675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.837 [2024-07-21 16:32:20.238687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:06.837 [2024-07-21 16:32:20.238699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:06.837 [2024-07-21 16:32:20.238765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:06.837 [2024-07-21 16:32:20.238800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db2e30 (9): Bad file descriptor 00:17:06.838 [2024-07-21 16:32:20.241776] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:06.838 Running I/O for 1 seconds... 
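The try.txt excerpt above is what a single forced failover looks like from the initiator: detaching the 4420 path causes the admin qpair's ASYNC EVENT REQUESTs to be aborted, bdev_nvme announces the switch to 10.0.0.2:4421, the controller briefly sits in the failed state while the old TCP qpair is torn down (the "Bad file descriptor" flush error), and I/O resumes once the reset completes. Those few notices are usually all that is needed from such a capture; for example, with the file path used by this run:

  # Pull just the failover timeline out of a capture like the one above.
  grep -E 'Start failover from|in failed state|resetting controller|Resetting controller successful' \
      /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt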
00:17:06.838 00:17:06.838 Latency(us) 00:17:06.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.838 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:06.838 Verification LBA range: start 0x0 length 0x4000 00:17:06.838 NVMe0n1 : 1.01 9090.15 35.51 0.00 0.00 14010.92 1824.58 15609.48 00:17:06.838 =================================================================================================================== 00:17:06.838 Total : 9090.15 35.51 0.00 0.00 14010.92 1824.58 15609.48 00:17:06.838 16:32:24 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:06.838 16:32:24 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:06.838 16:32:24 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:07.096 16:32:25 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:07.096 16:32:25 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:07.354 16:32:25 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:07.612 16:32:25 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88359 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88359 ']' 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88359 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88359 00:17:10.893 killing process with pid 88359 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:10.893 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:10.894 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88359' 00:17:10.894 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88359 00:17:10.894 16:32:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88359 00:17:11.152 16:32:29 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:11.152 16:32:29 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:11.409 16:32:29 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:11.409 rmmod nvme_tcp 00:17:11.409 rmmod nvme_fabrics 00:17:11.409 rmmod nvme_keyring 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 88000 ']' 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 88000 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88000 ']' 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88000 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88000 00:17:11.409 killing process with pid 88000 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88000' 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88000 00:17:11.409 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88000 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:11.976 00:17:11.976 real 0m32.777s 00:17:11.976 user 2m7.027s 00:17:11.976 sys 0m4.824s 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:11.976 16:32:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:11.976 ************************************ 00:17:11.976 END TEST nvmf_failover 00:17:11.976 ************************************ 00:17:11.976 16:32:30 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:17:11.976 16:32:30 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:11.976 16:32:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:11.976 16:32:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.976 16:32:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:11.976 ************************************ 00:17:11.976 START TEST nvmf_host_discovery 00:17:11.976 ************************************ 00:17:11.976 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:11.976 * Looking for test storage... 00:17:11.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:11.977 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:11.977 Cannot find device "nvmf_tgt_br" 00:17:12.236 
16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.236 Cannot find device "nvmf_tgt_br2" 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:12.236 Cannot find device "nvmf_tgt_br" 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:12.236 Cannot find device "nvmf_tgt_br2" 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:12.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:12.236 00:17:12.236 --- 10.0.0.2 ping statistics --- 00:17:12.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.236 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:12.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:12.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:12.236 00:17:12.236 --- 10.0.0.3 ping statistics --- 00:17:12.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.236 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:12.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
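Because this run sets NET_TYPE=virt, nvmftestinit builds the fabric out of veth pairs instead of real NICs: the target-side interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, the initiator keeps 10.0.0.1 on the host side, and the nvmf_br bridge joins them; the pings above (and the namespace-side ping completing below) are the reachability check before the target starts. A condensed sketch of that topology, built from the same commands the log shows (only one target veth pair spelled out, link-up steps omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target address must answer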
00:17:12.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:12.236 00:17:12.236 --- 10.0.0.1 ping statistics --- 00:17:12.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.236 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:12.236 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88798 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88798 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88798 ']' 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.494 16:32:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:12.494 [2024-07-21 16:32:30.528165] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:17:12.494 [2024-07-21 16:32:30.528283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.494 [2024-07-21 16:32:30.667741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.754 [2024-07-21 16:32:30.773371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
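With the namespace up, the discovery test drives two SPDK applications: the target (nvmf_tgt -m 0x2) runs inside nvmf_tgt_ns_spdk on the default RPC socket, while a second nvmf_tgt (-m 0x1 -r /tmp/host.sock) acts as the host and runs bdev_nvme_start_discovery against the discovery service on 10.0.0.2:8009. The RPC sequence the rest of this trace walks through, condensed into a sketch (rpc.py stands in for the test's rpc_cmd wrapper; ports, NQNs and bdev names are the ones in the log):

# target side (default /var/tmp/spdk.sock)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py bdev_null_create null0 1000 512
rpc.py bdev_null_create null1 1000 512

# host side: start discovery with the host NQN the subsystem will later allow
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
  -f ipv4 -q nqn.2021-12.io.spdk:test

# target side: expose the bdevs; each step should become visible on the host
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test  # -> nvme0 / nvme0n1 appear
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1                       # -> nvme0n2 appears
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421  # -> second path

# the host-side waitforcondition loops poll these until the expected names show up
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
rpc.py -s /tmp/host.sock bdev_get_bdevs
rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'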
00:17:12.754 [2024-07-21 16:32:30.773424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.754 [2024-07-21 16:32:30.773448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.754 [2024-07-21 16:32:30.773456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.754 [2024-07-21 16:32:30.773462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.754 [2024-07-21 16:32:30.773487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.319 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.319 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:13.319 16:32:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.319 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:13.319 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.578 [2024-07-21 16:32:31.571589] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.578 [2024-07-21 16:32:31.579811] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.578 null0 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.578 null1 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88848 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88848 /tmp/host.sock 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88848 ']' 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.578 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.578 16:32:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.578 [2024-07-21 16:32:31.673020] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:17:13.578 [2024-07-21 16:32:31.673138] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88848 ] 00:17:13.836 [2024-07-21 16:32:31.814257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.836 [2024-07-21 16:32:31.937467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:14.771 16:32:32 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:14.771 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:14.772 16:32:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.030 [2024-07-21 16:32:33.064074] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 
-- # sort 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.030 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:17:15.289 16:32:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:15.549 [2024-07-21 16:32:33.704499] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:15.549 [2024-07-21 16:32:33.704528] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:15.549 [2024-07-21 16:32:33.704563] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:15.807 [2024-07-21 16:32:33.790634] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:15.807 [2024-07-21 16:32:33.847485] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:15.807 [2024-07-21 16:32:33.847530] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:16.372 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.373 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.631 [2024-07-21 16:32:34.665024] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:16.631 [2024-07-21 16:32:34.665388] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:16.631 [2024-07-21 16:32:34.665418] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:16.631 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:16.632 [2024-07-21 16:32:34.752473] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.632 16:32:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.632 [2024-07-21 16:32:34.819061] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:16.632 [2024-07-21 16:32:34.819088] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:16.632 [2024-07-21 16:32:34.819110] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:16.890 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:17:16.890 16:32:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:17.833 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.834 [2024-07-21 16:32:35.961944] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:17.834 [2024-07-21 16:32:35.961974] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:17.834 [2024-07-21 16:32:35.965088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.834 [2024-07-21 16:32:35.965123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-07-21 16:32:35.965152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.834 [2024-07-21 16:32:35.965160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-07-21 16:32:35.965168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.834 [2024-07-21 16:32:35.965176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-07-21 16:32:35.965185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.834 [2024-07-21 16:32:35.965193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-07-21 16:32:35.965201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90c50 is same with the state(5) to be set 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 
00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:17.834 [2024-07-21 16:32:35.975052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90c50 (9): Bad file descriptor 00:17:17.834 [2024-07-21 16:32:35.985072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:17.834 [2024-07-21 16:32:35.985181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.834 [2024-07-21 16:32:35.985202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90c50 with addr=10.0.0.2, port=4420 00:17:17.834 [2024-07-21 16:32:35.985212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90c50 is same with the state(5) to be set 00:17:17.834 [2024-07-21 16:32:35.985228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90c50 (9): Bad file descriptor 00:17:17.834 [2024-07-21 16:32:35.985255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:17.834 [2024-07-21 16:32:35.985266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:17.834 [2024-07-21 16:32:35.985297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:17.834 [2024-07-21 16:32:35.985314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
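The @912-@918 xtrace lines repeated throughout this test come from a small polling helper in autotest_common.sh. A minimal sketch of that pattern, reconstructed only from the statements visible in the trace (local cond, local max=10, (( max-- )), eval, sleep 1); the real helper may differ in detail:

# Sketch of the waitforcondition polling loop inferred from the xtrace above;
# the authoritative definition lives in common/autotest_common.sh.
waitforcondition() {
    local cond=$1    # a bash expression, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    local max=10     # give the condition roughly ten seconds to become true
    while (( max-- )); do
        if eval "$cond"; then
            return 0  # condition holds, stop polling
        fi
        sleep 1
    done
    return 1          # condition never held within the window; the caller fails the test
}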
00:17:17.834 16:32:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.834 [2024-07-21 16:32:35.995139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:17.834 [2024-07-21 16:32:35.995232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.834 [2024-07-21 16:32:35.995251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90c50 with addr=10.0.0.2, port=4420 00:17:17.834 [2024-07-21 16:32:35.995261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90c50 is same with the state(5) to be set 00:17:17.834 [2024-07-21 16:32:35.995307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90c50 (9): Bad file descriptor 00:17:17.834 [2024-07-21 16:32:35.995322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:17.834 [2024-07-21 16:32:35.995331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:17.834 [2024-07-21 16:32:35.995339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:17.834 [2024-07-21 16:32:35.995353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.834 [2024-07-21 16:32:36.005204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:17.834 [2024-07-21 16:32:36.005354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.834 [2024-07-21 16:32:36.005375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90c50 with addr=10.0.0.2, port=4420 00:17:17.834 [2024-07-21 16:32:36.005385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90c50 is same with the state(5) to be set 00:17:17.834 [2024-07-21 16:32:36.005409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90c50 (9): Bad file descriptor 00:17:17.834 [2024-07-21 16:32:36.005424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:17.834 [2024-07-21 16:32:36.005431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:17.834 [2024-07-21 16:32:36.005439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:17.834 [2024-07-21 16:32:36.005453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:17.834 [2024-07-21 16:32:36.015276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:17.834 [2024-07-21 16:32:36.015388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.834 [2024-07-21 16:32:36.015408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90c50 with addr=10.0.0.2, port=4420 00:17:17.834 [2024-07-21 16:32:36.015418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90c50 is same with the state(5) to be set 00:17:17.834 [2024-07-21 16:32:36.015434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90c50 (9): Bad file descriptor 00:17:17.834 [2024-07-21 16:32:36.015447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:17.834 [2024-07-21 16:32:36.015455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:17.834 [2024-07-21 16:32:36.015463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:17.834 [2024-07-21 16:32:36.015477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:17.834 [2024-07-21 16:32:36.025357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:17.834 [2024-07-21 16:32:36.025425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.834 [2024-07-21 16:32:36.025443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90c50 with addr=10.0.0.2, port=4420 00:17:17.834 [2024-07-21 16:32:36.025469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90c50 is same with the state(5) to be set 00:17:17.834 [2024-07-21 16:32:36.025483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90c50 (9): Bad file descriptor 00:17:17.834 [2024-07-21 16:32:36.025496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:17.834 [2024-07-21 16:32:36.025504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:17.834 [2024-07-21 16:32:36.025512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:17.834 [2024-07-21 16:32:36.025525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:17.834 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:17.834 [2024-07-21 16:32:36.035398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:17.834 [2024-07-21 16:32:36.035501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.834 [2024-07-21 16:32:36.035521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90c50 with addr=10.0.0.2, port=4420 00:17:17.834 [2024-07-21 16:32:36.035530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90c50 is same with the state(5) to be set 00:17:17.834 [2024-07-21 16:32:36.035545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90c50 (9): Bad file descriptor 00:17:17.834 [2024-07-21 16:32:36.035558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:17.834 [2024-07-21 16:32:36.035566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:17.834 [2024-07-21 16:32:36.035574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:17.834 [2024-07-21 16:32:36.035587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:18.092 [2024-07-21 16:32:36.045467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:18.092 [2024-07-21 16:32:36.045558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.092 [2024-07-21 16:32:36.045577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90c50 with addr=10.0.0.2, port=4420 00:17:18.092 [2024-07-21 16:32:36.045586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90c50 is same with the state(5) to be set 00:17:18.092 [2024-07-21 16:32:36.045600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90c50 (9): Bad file descriptor 00:17:18.092 [2024-07-21 16:32:36.045613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:18.092 [2024-07-21 16:32:36.045635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:18.092 [2024-07-21 16:32:36.045643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:18.092 [2024-07-21 16:32:36.045655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
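The get_subsystem_names, get_bdev_list and get_subsystem_paths calls being polled here are thin wrappers around rpc_cmd piped through jq, as the host/discovery.sh@55/@59/@63 trace lines show. A sketch consistent with that trace (the actual definitions live in test/nvmf/host/discovery.sh and may differ slightly):

# Reconstructed from the @55/@59/@63 xtrace lines; /tmp/host.sock is the host-side RPC socket.
get_bdev_list() {
    # space-separated, sorted list of bdev names, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_names() {
    # attached NVMe-oF controllers, e.g. "nvme0"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # service IDs (TCP ports) of every path to controller $1, e.g. "4420 4421"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}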
00:17:18.092 [2024-07-21 16:32:36.048014] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:18.092 [2024-07-21 16:32:36.048039] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:18.092 
16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.092 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.350 16:32:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.285 [2024-07-21 16:32:37.384604] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:19.285 [2024-07-21 16:32:37.384626] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:19.285 [2024-07-21 16:32:37.384642] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:19.285 [2024-07-21 16:32:37.470727] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:19.545 [2024-07-21 16:32:37.530632] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:19.545 [2024-07-21 16:32:37.530687] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.545 2024/07/21 16:32:37 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:19.545 request: 00:17:19.545 { 00:17:19.545 "method": "bdev_nvme_start_discovery", 00:17:19.545 "params": { 00:17:19.545 "name": "nvme", 00:17:19.545 "trtype": "tcp", 00:17:19.545 "traddr": "10.0.0.2", 00:17:19.545 "adrfam": "ipv4", 00:17:19.545 "trsvcid": "8009", 00:17:19.545 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:19.545 "wait_for_attach": true 00:17:19.545 } 00:17:19.545 } 00:17:19.545 Got JSON-RPC error response 00:17:19.545 GoRPCClient: error on JSON-RPC call 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.545 2024/07/21 16:32:37 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:19.545 request: 00:17:19.545 { 00:17:19.545 "method": "bdev_nvme_start_discovery", 00:17:19.545 "params": { 00:17:19.545 "name": "nvme_second", 00:17:19.545 "trtype": "tcp", 00:17:19.545 "traddr": "10.0.0.2", 00:17:19.545 "adrfam": "ipv4", 00:17:19.545 "trsvcid": "8009", 00:17:19.545 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:19.545 "wait_for_attach": true 00:17:19.545 } 00:17:19.545 } 00:17:19.545 Got JSON-RPC error response 00:17:19.545 GoRPCClient: error on JSON-RPC call 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:19.545 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.804 16:32:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.741 [2024-07-21 16:32:38.807890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:20.741 [2024-07-21 16:32:38.807966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9ae90 with addr=10.0.0.2, port=8010 00:17:20.741 [2024-07-21 16:32:38.807983] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:20.741 [2024-07-21 16:32:38.807991] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:20.741 [2024-07-21 16:32:38.807999] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:21.677 [2024-07-21 16:32:39.807882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.677 [2024-07-21 16:32:39.807954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9ae90 with addr=10.0.0.2, port=8010 00:17:21.677 [2024-07-21 16:32:39.807970] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:21.677 [2024-07-21 16:32:39.807978] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:21.677 [2024-07-21 16:32:39.807985] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:22.657 [2024-07-21 16:32:40.807817] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 
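The -T 3000 case traced at host/discovery.sh@155 above is the attach-timeout negative test: nothing listens on port 8010, so each connect() fails with errno 111 and the discovery poller gives up once the 3000 ms window expires, producing the Code=-110 error that follows. Assuming rpc_cmd forwards its arguments to scripts/rpc.py (the multipath test below sets rpc_py to that script), the equivalent standalone invocation would look roughly like:

# Hypothetical direct rpc.py call mirroring the traced rpc_cmd arguments;
# expected to fail with "Connection timed out" because no target listens on 8010.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000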
00:17:22.657 2024/07/21 16:32:40 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:17:22.657 request: 00:17:22.657 { 00:17:22.657 "method": "bdev_nvme_start_discovery", 00:17:22.657 "params": { 00:17:22.657 "name": "nvme_second", 00:17:22.657 "trtype": "tcp", 00:17:22.657 "traddr": "10.0.0.2", 00:17:22.657 "adrfam": "ipv4", 00:17:22.657 "trsvcid": "8010", 00:17:22.657 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:22.657 "wait_for_attach": false, 00:17:22.657 "attach_timeout_ms": 3000 00:17:22.657 } 00:17:22.657 } 00:17:22.657 Got JSON-RPC error response 00:17:22.657 GoRPCClient: error on JSON-RPC call 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.657 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88848 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.923 rmmod nvme_tcp 00:17:22.923 rmmod nvme_fabrics 00:17:22.923 rmmod nvme_keyring 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:17:22.923 16:32:40 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88798 ']' 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88798 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88798 ']' 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88798 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88798 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88798' 00:17:22.923 killing process with pid 88798 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88798 00:17:22.923 16:32:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88798 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:23.182 00:17:23.182 real 0m11.317s 00:17:23.182 user 0m22.315s 00:17:23.182 sys 0m1.757s 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:23.182 16:32:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.182 ************************************ 00:17:23.182 END TEST nvmf_host_discovery 00:17:23.182 ************************************ 00:17:23.440 16:32:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:23.440 16:32:41 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:23.440 16:32:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:23.440 16:32:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.440 16:32:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:23.440 ************************************ 00:17:23.440 START TEST nvmf_host_multipath_status 00:17:23.440 ************************************ 00:17:23.440 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:23.440 * Looking for test 
storage... 00:17:23.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:23.440 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.440 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:23.440 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.440 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.440 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.440 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:23.441 Cannot find device "nvmf_tgt_br" 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:17:23.441 Cannot find device "nvmf_tgt_br2" 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:23.441 Cannot find device "nvmf_tgt_br" 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:23.441 Cannot find device "nvmf_tgt_br2" 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:23.441 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:23.700 16:32:41 
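For readers reconstructing the test topology from the trace: the "Cannot find device" and "Cannot open network namespace" messages above are just nvmf_veth_init tearing down leftovers from a previous run before rebuilding, so they are expected. The setup itself, condensed from the ip commands in the trace (names and addresses exactly as logged, teardown and error handling omitted), is roughly:

    # network namespace that will host the NVMe-oF target
    ip netns add nvmf_tgt_ns_spdk

    # three veth pairs: one for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up (the namespaced ends via ip netns exec)
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

The bridging and iptables steps that follow in the trace (the nvmf_br bridge, the ACCEPT rules for port 4420, and the three pings) complete the plumbing before the target is started.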
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:23.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:17:23.700 00:17:23.700 --- 10.0.0.2 ping statistics --- 00:17:23.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.700 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:23.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:23.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:17:23.700 00:17:23.700 --- 10.0.0.3 ping statistics --- 00:17:23.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.700 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:23.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:23.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:17:23.700 00:17:23.700 --- 10.0.0.1 ping statistics --- 00:17:23.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.700 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:23.700 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:23.701 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89332 00:17:23.701 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89332 00:17:23.701 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:23.701 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89332 ']' 00:17:23.701 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.701 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.701 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.701 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.701 16:32:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:23.959 [2024-07-21 16:32:41.945047] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
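With the namespace reachable from the initiator side (the three pings to 10.0.0.2, 10.0.0.3 and back to 10.0.0.1 all succeed above), nvmfappstart launches the NVMe-oF target inside that namespace and waits for its JSON-RPC socket. Condensed from the trace, with the wait written out as a minimal stand-in for the autotest_common.sh waitforlisten helper (the polling interval and the rpc_get_methods probe are assumptions, not what the helper literally does):

    # nvmf_tgt inside the test namespace: shm id 0, all tracepoint groups (-e 0xFFFF), cores 0-1 (-m 0x3)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # wait until the app answers on its default RPC socket (/var/tmp/spdk.sock)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.5
    done

The "Total cores available: 2" and the two "Reactor started on core" notices that follow in the trace are the target confirming the 0x3 core mask.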
00:17:23.959 [2024-07-21 16:32:41.945156] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.959 [2024-07-21 16:32:42.083627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:24.218 [2024-07-21 16:32:42.171916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.218 [2024-07-21 16:32:42.171985] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.218 [2024-07-21 16:32:42.171996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.218 [2024-07-21 16:32:42.172003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.218 [2024-07-21 16:32:42.172009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.218 [2024-07-21 16:32:42.172172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.218 [2024-07-21 16:32:42.172180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.785 16:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.785 16:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:24.785 16:32:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:24.785 16:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:24.785 16:32:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:24.785 16:32:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.785 16:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89332 00:17:24.785 16:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:25.044 [2024-07-21 16:32:43.195988] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.044 16:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:25.609 Malloc0 00:17:25.609 16:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:25.609 16:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:25.866 16:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.125 [2024-07-21 16:32:44.238782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.125 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:17:26.383 [2024-07-21 16:32:44.458818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89431 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89431 /var/tmp/bdevperf.sock 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89431 ']' 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.383 16:32:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:27.316 16:32:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:27.316 16:32:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:27.316 16:32:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:27.607 16:32:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:27.865 Nvme0n1 00:17:27.865 16:32:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:28.431 Nvme0n1 00:17:28.431 16:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:28.431 16:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:30.339 16:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:30.339 16:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:30.596 16:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:17:30.855 16:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:31.788 16:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:31.788 16:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:31.788 16:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:31.788 16:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:32.046 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:32.046 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:32.046 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:32.046 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.303 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:32.303 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:32.303 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.303 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:32.561 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:32.561 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:32.561 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:32.561 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.834 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:32.834 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:32.834 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:32.834 16:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:33.090 16:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.090 16:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:33.090 16:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:33.090 16:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:33.346 16:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:33.346 16:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:33.346 16:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:33.603 16:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:33.861 16:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:34.797 16:32:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:34.797 16:32:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:34.797 16:32:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:34.797 16:32:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:35.065 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:35.065 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:35.065 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:35.065 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.322 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:35.322 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:35.322 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.322 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:35.580 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:35.580 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:35.580 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.580 16:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:35.837 16:32:54 
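Stepping back over the RPCs issued between the nvmfapp_pid line and the first check_status round: the target gets a TCP transport, a small malloc bdev (64 MB, 512-byte blocks) and one ANA-reporting subsystem with listeners on ports 4420 and 4421, and the bdevperf host process (started with -m 0x4 -z -r /var/tmp/bdevperf.sock) attaches the same controller over both listeners. Collected from the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side (default RPC socket): transport, backing bdev, subsystem, namespace, two listeners
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # host side (bdevperf RPC socket): attach both paths; -x multipath makes the second
    # connection another path of the same Nvme0 controller instead of a new bdev
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

bdevperf.py then starts the 128-deep, 4 KiB verify workload against the resulting Nvme0n1 bdev (perform_tests), and the rest of the log is the ANA permutation sweep.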
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:35.837 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:35.837 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.837 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:36.402 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.402 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:36.402 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.402 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:36.402 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.402 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:36.402 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:36.660 16:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:36.917 16:32:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:38.290 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:38.290 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:38.290 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.290 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:38.290 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.290 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:38.290 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.290 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:38.548 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:38.548 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:38.548 16:32:56 
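Every port_status line in the trace is the same two-step query: ask the bdevperf application for its view of the I/O paths (bdev_nvme_get_io_paths), pull a single boolean out of the JSON with jq, and compare it to the expected value. A simplified reconstruction of the helper, with names as they appear in multipath_status.sh:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <field> <expected>: check one attribute (current/connected/accessible)
    # of the path terminating on the given target port, as the host currently sees it
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }

    # example (the round at multipath_status.sh@96): with 4420 non_optimized, 4421 optimized
    # and the default active_passive policy, the 4421 path should be the active one
    port_status 4420 current false
    port_status 4421 current true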
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.548 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:38.807 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:38.807 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:38.807 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:38.807 16:32:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:39.065 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.065 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:39.065 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.065 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:39.324 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.324 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:39.324 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.324 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:39.582 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.582 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:39.582 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:39.841 16:32:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:40.099 16:32:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:41.033 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:41.033 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:41.033 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.033 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:41.292 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.292 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:41.292 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.292 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:41.550 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:41.550 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:41.550 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:41.550 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:41.808 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.808 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:41.808 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:41.808 16:32:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.066 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:42.066 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:42.066 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.066 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:42.323 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:42.323 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:42.323 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.323 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:42.580 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:42.580 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:42.580 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:42.837 16:33:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:43.093 16:33:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:44.025 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:44.025 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:44.025 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.025 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:44.283 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:44.283 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:44.283 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.283 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:44.572 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:44.572 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:44.572 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.572 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:44.829 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.829 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:44.829 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:44.829 16:33:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.086 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:45.086 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:45.086 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.086 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:45.346 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:45.346 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:45.346 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:45.346 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:45.603 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:45.603 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:45.603 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:45.861 16:33:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:46.119 16:33:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:47.050 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:47.050 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:47.050 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.050 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:47.308 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:47.308 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:47.308 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.308 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:47.565 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.565 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:47.565 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.565 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:47.823 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.823 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:47.823 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.823 16:33:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:48.081 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.081 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:48.081 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.081 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:48.338 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:48.339 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:48.339 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:48.339 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:48.601 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:48.601 16:33:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:48.885 16:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:48.885 16:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:49.148 16:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:49.407 16:33:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:50.342 16:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:50.342 16:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:50.342 16:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.601 16:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:50.859 16:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.859 16:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:50.859 16:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.859 16:33:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:51.118 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.118 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:51.118 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.118 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:51.377 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.377 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:51.377 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.377 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:51.635 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.635 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:51.635 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:51.635 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:51.893 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:51.893 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:51.893 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:51.893 16:33:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.150 16:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:52.150 16:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:52.150 16:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:52.408 16:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:52.408 16:33:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:53.779 
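Each round of the sweep has the same shape: set the ANA state advertised by each listener on the target, give the host a second to pick up the ANA log page change, then assert the per-path flags. At multipath_status.sh@116 the host-side policy for Nvme0n1 is switched from the default active_passive to active_active, which is why the rounds from @119 onward can expect both optimized paths to be current at the same time. Reconstructed from the commands in the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # set_ANA_state <state_4420> <state_4421>: change what each target listener reports
    # (optimized, non_optimized or inaccessible)
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # host side: let I/O use every optimized path instead of a single active one
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

    set_ANA_state optimized optimized
    sleep 1
    # check_status <cur 4420> <cur 4421> <conn 4420> <conn 4421> <acc 4420> <acc 4421>
    check_status true true true true true true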
16:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:53.779 16:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:53.779 16:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.779 16:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:53.779 16:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:53.779 16:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:53.779 16:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.779 16:33:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:54.036 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.036 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:54.036 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.036 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:54.292 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.292 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:54.292 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:54.292 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.548 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.548 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:54.548 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.548 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:54.806 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.806 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:54.806 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.806 16:33:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:55.063 16:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:55.063 16:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:55.063 16:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:55.321 16:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:55.588 16:33:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:56.520 16:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:56.520 16:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:56.520 16:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.520 16:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:56.778 16:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:56.778 16:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:56.778 16:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:56.778 16:33:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.037 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.037 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:57.037 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.037 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:57.295 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.295 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:57.295 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:57.295 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.553 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.553 16:33:15 
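Reading the six booleans of each check_status call as current(4420), current(4421), connected(4420), connected(4421), accessible(4420), accessible(4421), the expectations asserted so far line up with the ANA states as follows (connected stays true throughout because the TCP connections themselves are never dropped):

    ANA 4420 / ANA 4421              policy           current 4420/4421   accessible 4420/4421
    optimized     / optimized        active_passive   true  / false       true  / true
    non_optimized / optimized        active_passive   false / true        true  / true
    non_optimized / non_optimized    active_passive   true  / false       true  / true
    non_optimized / inaccessible     active_passive   true  / false       true  / false
    inaccessible  / inaccessible     active_passive   false / false       false / false
    inaccessible  / optimized        active_passive   false / true        false / true
    optimized     / optimized        active_active    true  / true        true  / true
    non_optimized / optimized        active_active    false / true        true  / true
    non_optimized / non_optimized    active_active    true  / true        true  / true

The round that follows exercises non_optimized/inaccessible under active_active before the bdevperf process is torn down.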
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:57.553 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.553 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:57.811 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.811 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:57.811 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.811 16:33:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:58.069 16:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.070 16:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:58.070 16:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:58.327 16:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:58.584 16:33:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:59.515 16:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:59.515 16:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:59.515 16:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.515 16:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:59.771 16:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:59.771 16:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:59.771 16:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.771 16:33:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:00.027 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:00.027 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:00.027 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
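
check_status itself is just six port_status assertions in a fixed order, which is why each check_status line in this trace fans out into six rpc.py/jq pairs. A sketch of the wrapper, built on the port_status helper sketched earlier:

  # Sketch of check_status: arguments are the expected values for
  # 4420-current, 4421-current, 4420-connected, 4421-connected,
  # 4420-accessible, 4421-accessible, in that order.
  check_status() {
      port_status 4420 current "$1" && port_status 4421 current "$2" &&
      port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

  # After set_ANA_state non_optimized inaccessible (@133), only the 4420 path
  # should remain usable, hence the expectation traced at @135:
  check_status true false true true true false
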
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.027 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:00.284 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.284 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:00.284 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.284 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:00.541 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.541 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:00.541 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.541 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:00.799 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.799 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:00.799 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:00.799 16:33:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.057 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:01.057 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89431 00:18:01.057 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89431 ']' 00:18:01.057 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89431 00:18:01.057 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:01.057 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.314 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89431 00:18:01.314 killing process with pid 89431 00:18:01.314 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:01.314 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:01.314 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89431' 00:18:01.314 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89431 00:18:01.314 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89431 00:18:01.314 Connection closed with partial response: 00:18:01.314 
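
The teardown traced above is the generic killprocess helper from common/autotest_common.sh: verify the pid is set and still alive, look up its command name (reactor_2 here, because SPDK names its reactor thread after the core it runs on), log it, send the signal and reap the process; the "Connection closed with partial response" line is bdevperf noticing its connections going away while it shuts down. A simplified re-creation of that flow (not the upstream code), assuming the pid belongs to a child of the current shell so that wait can reap it:

  # Simplified killprocess sketch; the real helper also special-cases
  # processes started through sudo and non-Linux hosts.
  killprocess() {
      local pid=$1
      [[ -n "$pid" ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 1              # still running?
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")     # reactor_2 in this run
      echo "killing process with pid $pid ($process_name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                     # reap; ignore exit status
  }

  killprocess 89431   # the bdevperf instance started for this test
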
00:18:01.314 00:18:01.585 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89431 00:18:01.585 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:01.585 [2024-07-21 16:32:44.530650] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:18:01.585 [2024-07-21 16:32:44.530752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89431 ] 00:18:01.585 [2024-07-21 16:32:44.658768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.585 [2024-07-21 16:32:44.752565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.585 Running I/O for 90 seconds... 00:18:01.585 [2024-07-21 16:33:00.940300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940627] nvme_qpair.c: 
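
The try.txt contents dumped here are the log of the bdevperf process that the earlier port_status checks were talking to over /var/tmp/bdevperf.sock; the EAL parameters (-c 0x4, --file-prefix=spdk_pid89431) and the "Running I/O for 90 seconds" banner pin down roughly how it was started. A sketch of such a launch, where the binary path and the queue-depth/IO-size/workload flags are assumptions and only the core mask, RPC socket, log file and 90-second run time are implied by the log itself:

  # Hypothetical bdevperf launch matching the dumped log.
  rootdir=/home/vagrant/spdk_repo/spdk
  # -m 0x4 -> single reactor on core 2 (the "-c 0x4" EAL parameter / reactor_2)
  # -z     -> start idle and wait for RPC configuration of the NVMe-oF paths
  # -r     -> RPC socket used by the port_status checks earlier in the trace
  "$rootdir/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 \
      &> /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt &
  bdevperf_pid=$!   # 89431 in this run; torn down later via killprocess
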
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.940840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.940854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.941898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.941923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.941945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.941959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.941977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.941989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:18:01.585 [2024-07-21 16:33:00.942007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.585 [2024-07-21 16:33:00.942950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.942968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.942987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.943547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.585 [2024-07-21 16:33:00.943560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.944888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.585 [2024-07-21 16:33:00.944913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.944936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.585 [2024-07-21 16:33:00.944949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.944968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.585 [2024-07-21 16:33:00.944980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.945001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.585 [2024-07-21 16:33:00.945013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.945031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.585 [2024-07-21 16:33:00.945045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.585 [2024-07-21 16:33:00.945062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.585 [2024-07-21 16:33:00.945075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.586 [2024-07-21 16:33:00.945119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.586 [2024-07-21 16:33:00.945150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.586 [2024-07-21 16:33:00.945180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.586 [2024-07-21 16:33:00.945211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:18:01.586 [2024-07-21 16:33:00.945228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.586 [2024-07-21 16:33:00.945241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.586 [2024-07-21 16:33:00.945291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.586 [2024-07-21 16:33:00.945323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.586 [2024-07-21 16:33:00.945355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.586 [2024-07-21 16:33:00.945385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.945976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.945988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.586 [2024-07-21 16:33:00.946165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.946338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.946353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:18:01.586 [2024-07-21 16:33:00.947796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.947982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.947995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.948013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.948031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.586 [2024-07-21 16:33:00.948049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.586 [2024-07-21 16:33:00.948061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.587 [2024-07-21 16:33:00.948726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.948802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.948814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.949940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.949958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.587 [2024-07-21 16:33:00.961735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:18:01.587 [2024-07-21 16:33:00.961753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.587 [2024-07-21 16:33:00.961778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.587 [2024-07-21 16:33:00.961810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.587 [2024-07-21 16:33:00.961841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.587 [2024-07-21 16:33:00.961871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.587 [2024-07-21 16:33:00.961900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.587 [2024-07-21 16:33:00.961930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.587 [2024-07-21 16:33:00.961960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.961977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.587 [2024-07-21 16:33:00.961989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:01.587 [2024-07-21 16:33:00.962007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.588 [2024-07-21 16:33:00.962019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.588 [2024-07-21 16:33:00.962049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.588 [2024-07-21 16:33:00.962079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.588 [2024-07-21 16:33:00.962110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.588 [2024-07-21 16:33:00.962147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.588 [2024-07-21 16:33:00.962179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.588 [2024-07-21 16:33:00.962209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.588 [2024-07-21 16:33:00.962719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.962986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.962998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.963016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.963028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.963046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.963059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.963077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.963089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.963106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.963130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.963149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.963161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.963978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:18:01.588 [2024-07-21 16:33:00.964493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.964971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.964988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.965000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.965018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.965031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.965049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.965062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.965079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.965092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.965109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.965122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.965139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.965151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.965169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.588 [2024-07-21 16:33:00.965181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.588 [2024-07-21 16:33:00.965198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.589 [2024-07-21 16:33:00.965423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.965778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.965791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.966970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.966983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:18:01.589 [2024-07-21 16:33:00.967062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.589 [2024-07-21 16:33:00.967373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:01.589 [2024-07-21 16:33:00.967768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.589 [2024-07-21 16:33:00.967780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.967798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.590 [2024-07-21 16:33:00.967810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.967827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.590 [2024-07-21 16:33:00.967839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.967857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.967869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.967886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.967899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.967916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.967929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.967946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.967959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.967976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.590 [2024-07-21 16:33:00.967995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.968734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.968747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:18:01.590 [2024-07-21 16:33:00.969701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.969978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.969990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.970008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.590 [2024-07-21 16:33:00.970020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:01.590 [2024-07-21 16:33:00.970038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.591 [2024-07-21 16:33:00.970631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.970727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.970745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.978828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.978849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:18:01.591 [2024-07-21 16:33:00.979826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.979977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.979990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.591 [2024-07-21 16:33:00.980433] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 
16:33:00.980742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.591 [2024-07-21 16:33:00.980771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:01.591 [2024-07-21 16:33:00.980788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.592 [2024-07-21 16:33:00.980801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.980818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.592 [2024-07-21 16:33:00.980830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.980848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.592 [2024-07-21 16:33:00.980860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.980885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.592 [2024-07-21 16:33:00.980898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.980916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.980928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.980946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.980959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.980977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.980989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60928 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:27 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981690] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.981755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.981768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 
dnr:0 00:18:01.592 [2024-07-21 16:33:00.982723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.982972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.982985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.592 [2024-07-21 16:33:00.983681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.983981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.983994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.984949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.592 [2024-07-21 16:33:00.984983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:01.592 [2024-07-21 16:33:00.985007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:18:01.593 [2024-07-21 16:33:00.985251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.985865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.985894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.985922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.985950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.985978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.985995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:01.593 [2024-07-21 16:33:00.986161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.593 [2024-07-21 16:33:00.986347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.986710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.986727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:18:01.593 [2024-07-21 16:33:00.994681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.994711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.994723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.995977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.995989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.593 [2024-07-21 16:33:00.996375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:01.593 [2024-07-21 16:33:00.996431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.593 [2024-07-21 16:33:00.996444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.996975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.996987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:18:01.594 [2024-07-21 16:33:00.997893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.997969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.997982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.998821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.594 [2024-07-21 16:33:00.998851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.998881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.998912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.998941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.998979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.998996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.594 [2024-07-21 16:33:00.999324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:18:01.594 [2024-07-21 16:33:00.999778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:01.594 [2024-07-21 16:33:00.999898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.594 [2024-07-21 16:33:00.999911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:00.999929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:00.999941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:00.999959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:00.999971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:00.999988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.000001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.000019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.000031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.000049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.000061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.000083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.000102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.000822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.000846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.000869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.000883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.000901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.000913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.000930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.000943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.000961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.000973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.000991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.595 [2024-07-21 16:33:01.001422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.001978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.001997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:18:01.595 [2024-07-21 16:33:01.002350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.002598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.002610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.595 [2024-07-21 16:33:01.003889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.003980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.003998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.004010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.004040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.004069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.004099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.004135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.004166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.595 [2024-07-21 16:33:01.004195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.595 [2024-07-21 16:33:01.004225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.595 [2024-07-21 16:33:01.004256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.595 [2024-07-21 16:33:01.004302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:01.595 [2024-07-21 16:33:01.004320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.596 [2024-07-21 16:33:01.004684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:18:01.596 [2024-07-21 16:33:01.004819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.004982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.004994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.005972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.005996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:01.596 [2024-07-21 16:33:01.006045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.006565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.006587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.013973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.013994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:18:01.596 [2024-07-21 16:33:01.014132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:01.596 [2024-07-21 16:33:01.014711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.596 [2024-07-21 16:33:01.014724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:01.014746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:01.014759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:01.014781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:01.014801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:01.014956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:01.014978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.607193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.607275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.607314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.607346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.607376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.607406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.607437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.607469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.607499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.607531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:01.597 [2024-07-21 16:33:16.607598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.607632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.607668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.607698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.607799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.607831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.607861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.607879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.607892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.608522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.608552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.608582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.608613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.597 [2024-07-21 16:33:16.608847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.608888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.608918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.608949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:01.597 [2024-07-21 16:33:16.608967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.597 [2024-07-21 16:33:16.608980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:01.597 Received shutdown signal, test time was about 32.835356 seconds 00:18:01.597 00:18:01.597 Latency(us) 00:18:01.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.597 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:01.597 Verification LBA range: start 0x0 length 0x4000 00:18:01.597 
Nvme0n1 : 32.83 8546.68 33.39 0.00 0.00 14950.28 368.64 4087539.90 00:18:01.597 =================================================================================================================== 00:18:01.597 Total : 8546.68 33.39 0.00 0.00 14950.28 368.64 4087539.90 00:18:01.597 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:01.864 rmmod nvme_tcp 00:18:01.864 rmmod nvme_fabrics 00:18:01.864 rmmod nvme_keyring 00:18:01.864 16:33:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89332 ']' 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89332 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89332 ']' 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89332 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89332 00:18:01.864 killing process with pid 89332 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89332' 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89332 00:18:01.864 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89332 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:02.120 
16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:02.120 00:18:02.120 real 0m38.910s 00:18:02.120 user 2m6.110s 00:18:02.120 sys 0m9.863s 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:02.120 16:33:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:02.120 ************************************ 00:18:02.120 END TEST nvmf_host_multipath_status 00:18:02.120 ************************************ 00:18:02.377 16:33:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:02.377 16:33:20 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:02.377 16:33:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:02.377 16:33:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.377 16:33:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:02.377 ************************************ 00:18:02.377 START TEST nvmf_discovery_remove_ifc 00:18:02.377 ************************************ 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:02.377 * Looking for test storage... 
00:18:02.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:02.377 Cannot find device "nvmf_tgt_br" 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:02.377 Cannot find device "nvmf_tgt_br2" 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:02.377 Cannot find device "nvmf_tgt_br" 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:02.377 Cannot find device "nvmf_tgt_br2" 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:02.377 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:02.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:18:02.635 00:18:02.635 --- 10.0.0.2 ping statistics --- 00:18:02.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.635 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:02.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:02.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:18:02.635 00:18:02.635 --- 10.0.0.3 ping statistics --- 00:18:02.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.635 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:02.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:02.635 00:18:02.635 --- 10.0.0.1 ping statistics --- 00:18:02.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.635 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90724 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90724 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90724 ']' 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.635 16:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:02.892 [2024-07-21 16:33:20.884012] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:18:02.892 [2024-07-21 16:33:20.884141] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.892 [2024-07-21 16:33:21.023139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.149 [2024-07-21 16:33:21.111745] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.149 [2024-07-21 16:33:21.111812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.149 [2024-07-21 16:33:21.111822] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.149 [2024-07-21 16:33:21.111829] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.149 [2024-07-21 16:33:21.111835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.149 [2024-07-21 16:33:21.111868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.713 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.713 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:03.713 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:03.713 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:03.713 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:03.713 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.713 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:03.713 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.713 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:03.713 [2024-07-21 16:33:21.879214] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.713 [2024-07-21 16:33:21.887397] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:03.713 null0 00:18:03.713 [2024-07-21 16:33:21.919248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90777 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90777 /tmp/host.sock 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90777 ']' 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.970 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.970 16:33:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:03.970 [2024-07-21 16:33:22.001987] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:18:03.970 [2024-07-21 16:33:22.002106] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90777 ] 00:18:03.970 [2024-07-21 16:33:22.141444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.227 [2024-07-21 16:33:22.252574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.792 16:33:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:05.050 16:33:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.050 16:33:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:05.050 16:33:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.050 16:33:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:05.981 [2024-07-21 16:33:24.082351] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:05.981 [2024-07-21 16:33:24.082414] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:05.981 [2024-07-21 16:33:24.082434] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:05.981 [2024-07-21 16:33:24.168451] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:06.240 
[2024-07-21 16:33:24.225298] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:06.240 [2024-07-21 16:33:24.225392] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:06.240 [2024-07-21 16:33:24.225421] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:06.240 [2024-07-21 16:33:24.225437] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:06.240 [2024-07-21 16:33:24.225462] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:06.240 [2024-07-21 16:33:24.230824] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f60650 was disconnected and freed. delete nvme_qpair. 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.240 16:33:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:06.240 16:33:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:07.172 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:07.172 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:07.172 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:07.172 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.172 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:07.172 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:07.172 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:07.429 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.429 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:07.429 16:33:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:08.359 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:08.360 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:08.360 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.360 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:08.360 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:08.360 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:08.360 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:08.360 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.360 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:08.360 16:33:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:09.292 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:09.292 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.292 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:09.292 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.292 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:09.292 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:09.292 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:09.550 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.550 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:09.550 16:33:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:10.484 16:33:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:10.484 16:33:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:10.484 16:33:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.484 16:33:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:10.484 16:33:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:10.484 16:33:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.484 16:33:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:10.484 16:33:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.484 16:33:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:10.484 16:33:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:11.417 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:11.417 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.417 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.417 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:11.417 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:11.417 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:11.417 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:11.676 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.676 [2024-07-21 16:33:29.653436] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:11.676 [2024-07-21 16:33:29.653523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.676 [2024-07-21 16:33:29.653540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.676 [2024-07-21 16:33:29.653554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.676 [2024-07-21 16:33:29.653564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.676 [2024-07-21 16:33:29.653574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.676 [2024-07-21 16:33:29.653582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.676 [2024-07-21 16:33:29.653594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.676 [2024-07-21 16:33:29.653603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.676 [2024-07-21 16:33:29.653612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:11.676 [2024-07-21 16:33:29.653621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:11.676 [2024-07-21 16:33:29.653629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f29900 is same with the state(5) to be set 00:18:11.676 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:11.676 16:33:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:11.676 [2024-07-21 16:33:29.663428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f29900 (9): Bad file descriptor 00:18:11.676 [2024-07-21 16:33:29.673454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:12.610 [2024-07-21 16:33:30.688383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:12.610 [2024-07-21 16:33:30.688505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f29900 with addr=10.0.0.2, port=4420 00:18:12.610 [2024-07-21 16:33:30.688537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f29900 is same with the state(5) to be set 00:18:12.610 [2024-07-21 16:33:30.688604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f29900 (9): Bad file descriptor 00:18:12.610 [2024-07-21 16:33:30.689433] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:12.610 [2024-07-21 16:33:30.689478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.610 [2024-07-21 16:33:30.689496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.610 [2024-07-21 16:33:30.689516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.610 [2024-07-21 16:33:30.689553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
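The connection timeout (errno 110) and the burst of ABORTED - SQ DELETION completions above are the expected fallout of the @75/@76 steps earlier in this trace, which deleted 10.0.0.2/24 from nvmf_tgt_if and downed the veth inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of that step, together with the get_bdev_list polling the trace keeps repeating, looks roughly like the following; it assumes SPDK's scripts/rpc.py and jq are on PATH and is not the exact helper code from discovery_remove_ifc.sh:

# Rough sketch only: the real test goes through the rpc_cmd/get_bdev_list
# wrappers rather than invoking rpc.py directly.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# Poll the host application on /tmp/host.sock until no bdev is left.
while true; do
    bdevs=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ -z "$bdevs" ]] && break
    sleep 1
done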
00:18:12.610 [2024-07-21 16:33:30.689572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:12.610 16:33:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:13.543 [2024-07-21 16:33:31.689635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:13.543 [2024-07-21 16:33:31.689698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:13.543 [2024-07-21 16:33:31.689725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:13.543 [2024-07-21 16:33:31.689735] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:18:13.543 [2024-07-21 16:33:31.689772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:13.543 [2024-07-21 16:33:31.689801] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:13.543 [2024-07-21 16:33:31.689850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.543 [2024-07-21 16:33:31.689865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.543 [2024-07-21 16:33:31.689878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.543 [2024-07-21 16:33:31.689887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.543 [2024-07-21 16:33:31.689896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.543 [2024-07-21 16:33:31.689904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.543 [2024-07-21 16:33:31.689913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.543 [2024-07-21 16:33:31.689921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.543 [2024-07-21 16:33:31.689930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.543 [2024-07-21 16:33:31.689939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.543 [2024-07-21 16:33:31.689947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
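How quickly the host gives up on the dead path is governed by the options passed to bdev_nvme_start_discovery at @69, near the start of this test. The call is restated below with per-flag comments; the descriptions are an interpretation of the SPDK options, not output from the test itself:

# -b nvme                      : name prefix for attached controllers (nvme0, nvme1, ...)
# -a/-s/-f                     : discovery service address 10.0.0.2, port 8009, IPv4
# -q                           : host NQN used when connecting
# --ctrlr-loss-timeout-sec 2   : delete the controller after ~2s of failed reconnects
# --reconnect-delay-sec 1      : wait 1s between reconnect attempts
# --fast-io-fail-timeout-sec 1 : start failing I/O after 1s without a usable path
# --wait-for-attach            : return only once the discovered subsystem is attached
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach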
00:18:13.543 [2024-07-21 16:33:31.690632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecc3e0 (9): Bad file descriptor 00:18:13.543 [2024-07-21 16:33:31.691627] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:13.543 [2024-07-21 16:33:31.691648] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:13.543 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:13.543 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:13.543 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.543 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:13.543 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:13.543 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:13.543 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:13.543 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.801 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:13.801 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:13.801 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.801 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:13.802 16:33:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:14.735 16:33:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:14.735 16:33:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.735 16:33:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.735 16:33:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:14.735 16:33:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:14.735 16:33:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:14.735 16:33:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:14.735 16:33:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.735 16:33:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:14.735 16:33:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:15.680 [2024-07-21 16:33:33.703391] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:15.681 [2024-07-21 16:33:33.703426] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:15.681 [2024-07-21 16:33:33.703447] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:15.681 [2024-07-21 16:33:33.789489] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:15.681 [2024-07-21 16:33:33.845658] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:15.681 [2024-07-21 16:33:33.845711] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:15.681 [2024-07-21 16:33:33.845739] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:15.681 [2024-07-21 16:33:33.845757] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:15.681 [2024-07-21 16:33:33.845768] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:15.681 [2024-07-21 16:33:33.852031] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f450f0 was disconnected and freed. delete nvme_qpair. 
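With the discovery controller reattached, the path has been restored by reversing the earlier removal (@82/@83 above): the address was added back, the veth brought up, and the test now polls until a bdev reappears. Because discovery attaches a fresh controller, the namespace surfaces as nvme1n1 rather than nvme0n1, which is what the wait_for_bdev nvme1n1 check above is waiting on. A hypothetical condensed form of that recovery step, again assuming scripts/rpc.py and jq rather than the exact script helpers:

ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# Wait for the rediscovered namespace to show up under its new name.
until scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | grep -qx nvme1n1; do
    sleep 1
done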
00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90777 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90777 ']' 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90777 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.938 16:33:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90777 00:18:15.938 killing process with pid 90777 00:18:15.938 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:15.938 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:15.938 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90777' 00:18:15.938 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90777 00:18:15.938 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90777 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.195 rmmod nvme_tcp 00:18:16.195 rmmod nvme_fabrics 00:18:16.195 rmmod nvme_keyring 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:18:16.195 16:33:34 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90724 ']' 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90724 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90724 ']' 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90724 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90724 00:18:16.195 killing process with pid 90724 00:18:16.195 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:16.196 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:16.196 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90724' 00:18:16.196 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90724 00:18:16.196 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90724 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:16.761 00:18:16.761 real 0m14.347s 00:18:16.761 user 0m25.627s 00:18:16.761 sys 0m1.665s 00:18:16.761 ************************************ 00:18:16.761 END TEST nvmf_discovery_remove_ifc 00:18:16.761 ************************************ 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:16.761 16:33:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.761 16:33:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:16.761 16:33:34 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:16.761 16:33:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:16.761 16:33:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:16.761 16:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:16.761 ************************************ 00:18:16.761 START TEST nvmf_identify_kernel_target 00:18:16.761 ************************************ 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:16.762 * Looking for test storage... 00:18:16.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:16.762 Cannot find device "nvmf_tgt_br" 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.762 Cannot find device "nvmf_tgt_br2" 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:16.762 Cannot find device "nvmf_tgt_br" 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:16.762 Cannot find device "nvmf_tgt_br2" 00:18:16.762 16:33:34 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:18:16.762 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:17.020 16:33:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:17.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.020 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:17.020 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:17.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:18:17.021 00:18:17.021 --- 10.0.0.2 ping statistics --- 00:18:17.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.021 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:17.021 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:17.021 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:17.021 00:18:17.021 --- 10.0.0.3 ping statistics --- 00:18:17.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.021 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:17.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:17.021 00:18:17.021 --- 10.0.0.1 ping statistics --- 00:18:17.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.021 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:17.021 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:17.278 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:17.278 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:17.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:17.536 Waiting for block devices as requested 00:18:17.536 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:17.794 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:17.794 No valid GPT data, bailing 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:17.794 No valid GPT data, bailing 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:17.794 16:33:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:18.052 No valid GPT data, bailing 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:18.052 No valid GPT data, bailing 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
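The loop traced above walks /sys/block/nvme*, skips zoned namespaces, and rejects any device that spdk-gpt.py/blkid reports as carrying a partition table ("No valid GPT data, bailing" is the expected result for a free device); the last device that survives the checks, /dev/nvme1n1 here, becomes the backing namespace for the kernel target that the following records build. A simplified stand-in for that selection logic, using blkid alone instead of the repository's spdk-gpt.py helper, might look like:

  nvme_dev=
  for block in /sys/block/nvme*; do
      dev=$(basename "$block")
      # skip zoned namespaces, which cannot back a plain nvmet namespace here
      if [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]]; then
          continue
      fi
      # skip anything that already carries a partition table (i.e. is in use)
      if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
          continue
      fi
      nvme_dev=/dev/$dev
  done
  echo "backing device: ${nvme_dev:-none found}"

This is a sketch of the idea only; the actual checks live in nvmf/common.sh and scripts/common.sh as shown in the trace.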
00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:18.052 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -a 10.0.0.1 -t tcp -s 4420 00:18:18.052 00:18:18.052 Discovery Log Number of Records 2, Generation counter 2 00:18:18.053 =====Discovery Log Entry 0====== 00:18:18.053 trtype: tcp 00:18:18.053 adrfam: ipv4 00:18:18.053 subtype: current discovery subsystem 00:18:18.053 treq: not specified, sq flow control disable supported 00:18:18.053 portid: 1 00:18:18.053 trsvcid: 4420 00:18:18.053 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:18.053 traddr: 10.0.0.1 00:18:18.053 eflags: none 00:18:18.053 sectype: none 00:18:18.053 =====Discovery Log Entry 1====== 00:18:18.053 trtype: tcp 00:18:18.053 adrfam: ipv4 00:18:18.053 subtype: nvme subsystem 00:18:18.053 treq: not specified, sq flow control disable supported 00:18:18.053 portid: 1 00:18:18.053 trsvcid: 4420 00:18:18.053 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:18.053 traddr: 10.0.0.1 00:18:18.053 eflags: none 00:18:18.053 sectype: none 00:18:18.053 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:18.053 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:18.312 ===================================================== 00:18:18.312 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:18.312 ===================================================== 00:18:18.312 Controller Capabilities/Features 00:18:18.312 ================================ 00:18:18.312 Vendor ID: 0000 00:18:18.312 Subsystem Vendor ID: 0000 00:18:18.312 Serial Number: 8046015e3e0bc02bac98 00:18:18.312 Model Number: Linux 00:18:18.312 Firmware Version: 6.7.0-68 00:18:18.312 Recommended Arb Burst: 0 00:18:18.312 IEEE OUI Identifier: 00 00 00 00:18:18.312 Multi-path I/O 00:18:18.312 May have multiple subsystem ports: No 00:18:18.312 May have multiple controllers: No 00:18:18.312 Associated with SR-IOV VF: No 00:18:18.312 Max Data Transfer Size: Unlimited 00:18:18.312 Max Number of Namespaces: 0 
00:18:18.312 Max Number of I/O Queues: 1024 00:18:18.312 NVMe Specification Version (VS): 1.3 00:18:18.312 NVMe Specification Version (Identify): 1.3 00:18:18.312 Maximum Queue Entries: 1024 00:18:18.312 Contiguous Queues Required: No 00:18:18.312 Arbitration Mechanisms Supported 00:18:18.312 Weighted Round Robin: Not Supported 00:18:18.312 Vendor Specific: Not Supported 00:18:18.312 Reset Timeout: 7500 ms 00:18:18.312 Doorbell Stride: 4 bytes 00:18:18.312 NVM Subsystem Reset: Not Supported 00:18:18.312 Command Sets Supported 00:18:18.312 NVM Command Set: Supported 00:18:18.312 Boot Partition: Not Supported 00:18:18.312 Memory Page Size Minimum: 4096 bytes 00:18:18.312 Memory Page Size Maximum: 4096 bytes 00:18:18.312 Persistent Memory Region: Not Supported 00:18:18.312 Optional Asynchronous Events Supported 00:18:18.312 Namespace Attribute Notices: Not Supported 00:18:18.312 Firmware Activation Notices: Not Supported 00:18:18.312 ANA Change Notices: Not Supported 00:18:18.312 PLE Aggregate Log Change Notices: Not Supported 00:18:18.312 LBA Status Info Alert Notices: Not Supported 00:18:18.312 EGE Aggregate Log Change Notices: Not Supported 00:18:18.312 Normal NVM Subsystem Shutdown event: Not Supported 00:18:18.312 Zone Descriptor Change Notices: Not Supported 00:18:18.312 Discovery Log Change Notices: Supported 00:18:18.312 Controller Attributes 00:18:18.312 128-bit Host Identifier: Not Supported 00:18:18.312 Non-Operational Permissive Mode: Not Supported 00:18:18.312 NVM Sets: Not Supported 00:18:18.312 Read Recovery Levels: Not Supported 00:18:18.312 Endurance Groups: Not Supported 00:18:18.312 Predictable Latency Mode: Not Supported 00:18:18.312 Traffic Based Keep ALive: Not Supported 00:18:18.312 Namespace Granularity: Not Supported 00:18:18.312 SQ Associations: Not Supported 00:18:18.312 UUID List: Not Supported 00:18:18.312 Multi-Domain Subsystem: Not Supported 00:18:18.312 Fixed Capacity Management: Not Supported 00:18:18.312 Variable Capacity Management: Not Supported 00:18:18.312 Delete Endurance Group: Not Supported 00:18:18.312 Delete NVM Set: Not Supported 00:18:18.312 Extended LBA Formats Supported: Not Supported 00:18:18.312 Flexible Data Placement Supported: Not Supported 00:18:18.312 00:18:18.312 Controller Memory Buffer Support 00:18:18.312 ================================ 00:18:18.312 Supported: No 00:18:18.312 00:18:18.312 Persistent Memory Region Support 00:18:18.312 ================================ 00:18:18.312 Supported: No 00:18:18.312 00:18:18.313 Admin Command Set Attributes 00:18:18.313 ============================ 00:18:18.313 Security Send/Receive: Not Supported 00:18:18.313 Format NVM: Not Supported 00:18:18.313 Firmware Activate/Download: Not Supported 00:18:18.313 Namespace Management: Not Supported 00:18:18.313 Device Self-Test: Not Supported 00:18:18.313 Directives: Not Supported 00:18:18.313 NVMe-MI: Not Supported 00:18:18.313 Virtualization Management: Not Supported 00:18:18.313 Doorbell Buffer Config: Not Supported 00:18:18.313 Get LBA Status Capability: Not Supported 00:18:18.313 Command & Feature Lockdown Capability: Not Supported 00:18:18.313 Abort Command Limit: 1 00:18:18.313 Async Event Request Limit: 1 00:18:18.313 Number of Firmware Slots: N/A 00:18:18.313 Firmware Slot 1 Read-Only: N/A 00:18:18.313 Firmware Activation Without Reset: N/A 00:18:18.313 Multiple Update Detection Support: N/A 00:18:18.313 Firmware Update Granularity: No Information Provided 00:18:18.313 Per-Namespace SMART Log: No 00:18:18.313 Asymmetric Namespace Access Log Page: 
Not Supported 00:18:18.313 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:18.313 Command Effects Log Page: Not Supported 00:18:18.313 Get Log Page Extended Data: Supported 00:18:18.313 Telemetry Log Pages: Not Supported 00:18:18.313 Persistent Event Log Pages: Not Supported 00:18:18.313 Supported Log Pages Log Page: May Support 00:18:18.313 Commands Supported & Effects Log Page: Not Supported 00:18:18.313 Feature Identifiers & Effects Log Page:May Support 00:18:18.313 NVMe-MI Commands & Effects Log Page: May Support 00:18:18.313 Data Area 4 for Telemetry Log: Not Supported 00:18:18.313 Error Log Page Entries Supported: 1 00:18:18.313 Keep Alive: Not Supported 00:18:18.313 00:18:18.313 NVM Command Set Attributes 00:18:18.313 ========================== 00:18:18.313 Submission Queue Entry Size 00:18:18.313 Max: 1 00:18:18.313 Min: 1 00:18:18.313 Completion Queue Entry Size 00:18:18.313 Max: 1 00:18:18.313 Min: 1 00:18:18.313 Number of Namespaces: 0 00:18:18.313 Compare Command: Not Supported 00:18:18.313 Write Uncorrectable Command: Not Supported 00:18:18.313 Dataset Management Command: Not Supported 00:18:18.313 Write Zeroes Command: Not Supported 00:18:18.313 Set Features Save Field: Not Supported 00:18:18.313 Reservations: Not Supported 00:18:18.313 Timestamp: Not Supported 00:18:18.313 Copy: Not Supported 00:18:18.313 Volatile Write Cache: Not Present 00:18:18.313 Atomic Write Unit (Normal): 1 00:18:18.313 Atomic Write Unit (PFail): 1 00:18:18.313 Atomic Compare & Write Unit: 1 00:18:18.313 Fused Compare & Write: Not Supported 00:18:18.313 Scatter-Gather List 00:18:18.313 SGL Command Set: Supported 00:18:18.313 SGL Keyed: Not Supported 00:18:18.313 SGL Bit Bucket Descriptor: Not Supported 00:18:18.313 SGL Metadata Pointer: Not Supported 00:18:18.313 Oversized SGL: Not Supported 00:18:18.313 SGL Metadata Address: Not Supported 00:18:18.313 SGL Offset: Supported 00:18:18.313 Transport SGL Data Block: Not Supported 00:18:18.313 Replay Protected Memory Block: Not Supported 00:18:18.313 00:18:18.313 Firmware Slot Information 00:18:18.313 ========================= 00:18:18.313 Active slot: 0 00:18:18.313 00:18:18.313 00:18:18.313 Error Log 00:18:18.313 ========= 00:18:18.313 00:18:18.313 Active Namespaces 00:18:18.313 ================= 00:18:18.313 Discovery Log Page 00:18:18.313 ================== 00:18:18.313 Generation Counter: 2 00:18:18.313 Number of Records: 2 00:18:18.313 Record Format: 0 00:18:18.313 00:18:18.313 Discovery Log Entry 0 00:18:18.313 ---------------------- 00:18:18.313 Transport Type: 3 (TCP) 00:18:18.313 Address Family: 1 (IPv4) 00:18:18.313 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:18.313 Entry Flags: 00:18:18.313 Duplicate Returned Information: 0 00:18:18.313 Explicit Persistent Connection Support for Discovery: 0 00:18:18.313 Transport Requirements: 00:18:18.313 Secure Channel: Not Specified 00:18:18.313 Port ID: 1 (0x0001) 00:18:18.313 Controller ID: 65535 (0xffff) 00:18:18.313 Admin Max SQ Size: 32 00:18:18.313 Transport Service Identifier: 4420 00:18:18.313 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:18.313 Transport Address: 10.0.0.1 00:18:18.313 Discovery Log Entry 1 00:18:18.313 ---------------------- 00:18:18.313 Transport Type: 3 (TCP) 00:18:18.313 Address Family: 1 (IPv4) 00:18:18.313 Subsystem Type: 2 (NVM Subsystem) 00:18:18.313 Entry Flags: 00:18:18.313 Duplicate Returned Information: 0 00:18:18.313 Explicit Persistent Connection Support for Discovery: 0 00:18:18.313 Transport Requirements: 00:18:18.313 
Secure Channel: Not Specified 00:18:18.313 Port ID: 1 (0x0001) 00:18:18.313 Controller ID: 65535 (0xffff) 00:18:18.313 Admin Max SQ Size: 32 00:18:18.313 Transport Service Identifier: 4420 00:18:18.313 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:18.313 Transport Address: 10.0.0.1 00:18:18.313 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:18.572 get_feature(0x01) failed 00:18:18.572 get_feature(0x02) failed 00:18:18.572 get_feature(0x04) failed 00:18:18.572 ===================================================== 00:18:18.572 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:18.572 ===================================================== 00:18:18.572 Controller Capabilities/Features 00:18:18.572 ================================ 00:18:18.572 Vendor ID: 0000 00:18:18.572 Subsystem Vendor ID: 0000 00:18:18.572 Serial Number: bdadebbdbefe0b583f6e 00:18:18.572 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:18.572 Firmware Version: 6.7.0-68 00:18:18.572 Recommended Arb Burst: 6 00:18:18.572 IEEE OUI Identifier: 00 00 00 00:18:18.572 Multi-path I/O 00:18:18.572 May have multiple subsystem ports: Yes 00:18:18.572 May have multiple controllers: Yes 00:18:18.572 Associated with SR-IOV VF: No 00:18:18.572 Max Data Transfer Size: Unlimited 00:18:18.572 Max Number of Namespaces: 1024 00:18:18.572 Max Number of I/O Queues: 128 00:18:18.572 NVMe Specification Version (VS): 1.3 00:18:18.572 NVMe Specification Version (Identify): 1.3 00:18:18.572 Maximum Queue Entries: 1024 00:18:18.572 Contiguous Queues Required: No 00:18:18.572 Arbitration Mechanisms Supported 00:18:18.572 Weighted Round Robin: Not Supported 00:18:18.572 Vendor Specific: Not Supported 00:18:18.572 Reset Timeout: 7500 ms 00:18:18.572 Doorbell Stride: 4 bytes 00:18:18.572 NVM Subsystem Reset: Not Supported 00:18:18.572 Command Sets Supported 00:18:18.572 NVM Command Set: Supported 00:18:18.572 Boot Partition: Not Supported 00:18:18.572 Memory Page Size Minimum: 4096 bytes 00:18:18.572 Memory Page Size Maximum: 4096 bytes 00:18:18.572 Persistent Memory Region: Not Supported 00:18:18.572 Optional Asynchronous Events Supported 00:18:18.572 Namespace Attribute Notices: Supported 00:18:18.572 Firmware Activation Notices: Not Supported 00:18:18.572 ANA Change Notices: Supported 00:18:18.572 PLE Aggregate Log Change Notices: Not Supported 00:18:18.572 LBA Status Info Alert Notices: Not Supported 00:18:18.572 EGE Aggregate Log Change Notices: Not Supported 00:18:18.572 Normal NVM Subsystem Shutdown event: Not Supported 00:18:18.572 Zone Descriptor Change Notices: Not Supported 00:18:18.572 Discovery Log Change Notices: Not Supported 00:18:18.572 Controller Attributes 00:18:18.572 128-bit Host Identifier: Supported 00:18:18.572 Non-Operational Permissive Mode: Not Supported 00:18:18.573 NVM Sets: Not Supported 00:18:18.573 Read Recovery Levels: Not Supported 00:18:18.573 Endurance Groups: Not Supported 00:18:18.573 Predictable Latency Mode: Not Supported 00:18:18.573 Traffic Based Keep ALive: Supported 00:18:18.573 Namespace Granularity: Not Supported 00:18:18.573 SQ Associations: Not Supported 00:18:18.573 UUID List: Not Supported 00:18:18.573 Multi-Domain Subsystem: Not Supported 00:18:18.573 Fixed Capacity Management: Not Supported 00:18:18.573 Variable Capacity Management: Not Supported 00:18:18.573 
Delete Endurance Group: Not Supported 00:18:18.573 Delete NVM Set: Not Supported 00:18:18.573 Extended LBA Formats Supported: Not Supported 00:18:18.573 Flexible Data Placement Supported: Not Supported 00:18:18.573 00:18:18.573 Controller Memory Buffer Support 00:18:18.573 ================================ 00:18:18.573 Supported: No 00:18:18.573 00:18:18.573 Persistent Memory Region Support 00:18:18.573 ================================ 00:18:18.573 Supported: No 00:18:18.573 00:18:18.573 Admin Command Set Attributes 00:18:18.573 ============================ 00:18:18.573 Security Send/Receive: Not Supported 00:18:18.573 Format NVM: Not Supported 00:18:18.573 Firmware Activate/Download: Not Supported 00:18:18.573 Namespace Management: Not Supported 00:18:18.573 Device Self-Test: Not Supported 00:18:18.573 Directives: Not Supported 00:18:18.573 NVMe-MI: Not Supported 00:18:18.573 Virtualization Management: Not Supported 00:18:18.573 Doorbell Buffer Config: Not Supported 00:18:18.573 Get LBA Status Capability: Not Supported 00:18:18.573 Command & Feature Lockdown Capability: Not Supported 00:18:18.573 Abort Command Limit: 4 00:18:18.573 Async Event Request Limit: 4 00:18:18.573 Number of Firmware Slots: N/A 00:18:18.573 Firmware Slot 1 Read-Only: N/A 00:18:18.573 Firmware Activation Without Reset: N/A 00:18:18.573 Multiple Update Detection Support: N/A 00:18:18.573 Firmware Update Granularity: No Information Provided 00:18:18.573 Per-Namespace SMART Log: Yes 00:18:18.573 Asymmetric Namespace Access Log Page: Supported 00:18:18.573 ANA Transition Time : 10 sec 00:18:18.573 00:18:18.573 Asymmetric Namespace Access Capabilities 00:18:18.573 ANA Optimized State : Supported 00:18:18.573 ANA Non-Optimized State : Supported 00:18:18.573 ANA Inaccessible State : Supported 00:18:18.573 ANA Persistent Loss State : Supported 00:18:18.573 ANA Change State : Supported 00:18:18.573 ANAGRPID is not changed : No 00:18:18.573 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:18.573 00:18:18.573 ANA Group Identifier Maximum : 128 00:18:18.573 Number of ANA Group Identifiers : 128 00:18:18.573 Max Number of Allowed Namespaces : 1024 00:18:18.573 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:18.573 Command Effects Log Page: Supported 00:18:18.573 Get Log Page Extended Data: Supported 00:18:18.573 Telemetry Log Pages: Not Supported 00:18:18.573 Persistent Event Log Pages: Not Supported 00:18:18.573 Supported Log Pages Log Page: May Support 00:18:18.573 Commands Supported & Effects Log Page: Not Supported 00:18:18.573 Feature Identifiers & Effects Log Page:May Support 00:18:18.573 NVMe-MI Commands & Effects Log Page: May Support 00:18:18.573 Data Area 4 for Telemetry Log: Not Supported 00:18:18.573 Error Log Page Entries Supported: 128 00:18:18.573 Keep Alive: Supported 00:18:18.573 Keep Alive Granularity: 1000 ms 00:18:18.573 00:18:18.573 NVM Command Set Attributes 00:18:18.573 ========================== 00:18:18.573 Submission Queue Entry Size 00:18:18.573 Max: 64 00:18:18.573 Min: 64 00:18:18.573 Completion Queue Entry Size 00:18:18.573 Max: 16 00:18:18.573 Min: 16 00:18:18.573 Number of Namespaces: 1024 00:18:18.573 Compare Command: Not Supported 00:18:18.573 Write Uncorrectable Command: Not Supported 00:18:18.573 Dataset Management Command: Supported 00:18:18.573 Write Zeroes Command: Supported 00:18:18.573 Set Features Save Field: Not Supported 00:18:18.573 Reservations: Not Supported 00:18:18.573 Timestamp: Not Supported 00:18:18.573 Copy: Not Supported 00:18:18.573 Volatile Write Cache: Present 
00:18:18.573 Atomic Write Unit (Normal): 1 00:18:18.573 Atomic Write Unit (PFail): 1 00:18:18.573 Atomic Compare & Write Unit: 1 00:18:18.573 Fused Compare & Write: Not Supported 00:18:18.573 Scatter-Gather List 00:18:18.573 SGL Command Set: Supported 00:18:18.573 SGL Keyed: Not Supported 00:18:18.573 SGL Bit Bucket Descriptor: Not Supported 00:18:18.573 SGL Metadata Pointer: Not Supported 00:18:18.573 Oversized SGL: Not Supported 00:18:18.573 SGL Metadata Address: Not Supported 00:18:18.573 SGL Offset: Supported 00:18:18.573 Transport SGL Data Block: Not Supported 00:18:18.573 Replay Protected Memory Block: Not Supported 00:18:18.573 00:18:18.573 Firmware Slot Information 00:18:18.573 ========================= 00:18:18.573 Active slot: 0 00:18:18.573 00:18:18.573 Asymmetric Namespace Access 00:18:18.573 =========================== 00:18:18.573 Change Count : 0 00:18:18.573 Number of ANA Group Descriptors : 1 00:18:18.573 ANA Group Descriptor : 0 00:18:18.573 ANA Group ID : 1 00:18:18.573 Number of NSID Values : 1 00:18:18.573 Change Count : 0 00:18:18.573 ANA State : 1 00:18:18.573 Namespace Identifier : 1 00:18:18.573 00:18:18.573 Commands Supported and Effects 00:18:18.573 ============================== 00:18:18.573 Admin Commands 00:18:18.573 -------------- 00:18:18.573 Get Log Page (02h): Supported 00:18:18.573 Identify (06h): Supported 00:18:18.573 Abort (08h): Supported 00:18:18.573 Set Features (09h): Supported 00:18:18.573 Get Features (0Ah): Supported 00:18:18.573 Asynchronous Event Request (0Ch): Supported 00:18:18.573 Keep Alive (18h): Supported 00:18:18.573 I/O Commands 00:18:18.573 ------------ 00:18:18.573 Flush (00h): Supported 00:18:18.573 Write (01h): Supported LBA-Change 00:18:18.573 Read (02h): Supported 00:18:18.573 Write Zeroes (08h): Supported LBA-Change 00:18:18.573 Dataset Management (09h): Supported 00:18:18.573 00:18:18.573 Error Log 00:18:18.573 ========= 00:18:18.573 Entry: 0 00:18:18.573 Error Count: 0x3 00:18:18.573 Submission Queue Id: 0x0 00:18:18.573 Command Id: 0x5 00:18:18.573 Phase Bit: 0 00:18:18.573 Status Code: 0x2 00:18:18.573 Status Code Type: 0x0 00:18:18.573 Do Not Retry: 1 00:18:18.573 Error Location: 0x28 00:18:18.573 LBA: 0x0 00:18:18.573 Namespace: 0x0 00:18:18.573 Vendor Log Page: 0x0 00:18:18.573 ----------- 00:18:18.573 Entry: 1 00:18:18.573 Error Count: 0x2 00:18:18.573 Submission Queue Id: 0x0 00:18:18.573 Command Id: 0x5 00:18:18.573 Phase Bit: 0 00:18:18.573 Status Code: 0x2 00:18:18.573 Status Code Type: 0x0 00:18:18.573 Do Not Retry: 1 00:18:18.573 Error Location: 0x28 00:18:18.573 LBA: 0x0 00:18:18.573 Namespace: 0x0 00:18:18.573 Vendor Log Page: 0x0 00:18:18.573 ----------- 00:18:18.573 Entry: 2 00:18:18.573 Error Count: 0x1 00:18:18.573 Submission Queue Id: 0x0 00:18:18.573 Command Id: 0x4 00:18:18.573 Phase Bit: 0 00:18:18.573 Status Code: 0x2 00:18:18.573 Status Code Type: 0x0 00:18:18.573 Do Not Retry: 1 00:18:18.573 Error Location: 0x28 00:18:18.573 LBA: 0x0 00:18:18.573 Namespace: 0x0 00:18:18.573 Vendor Log Page: 0x0 00:18:18.573 00:18:18.573 Number of Queues 00:18:18.573 ================ 00:18:18.573 Number of I/O Submission Queues: 128 00:18:18.573 Number of I/O Completion Queues: 128 00:18:18.573 00:18:18.573 ZNS Specific Controller Data 00:18:18.573 ============================ 00:18:18.573 Zone Append Size Limit: 0 00:18:18.573 00:18:18.573 00:18:18.573 Active Namespaces 00:18:18.573 ================= 00:18:18.574 get_feature(0x05) failed 00:18:18.574 Namespace ID:1 00:18:18.574 Command Set Identifier: NVM (00h) 
00:18:18.574 Deallocate: Supported 00:18:18.574 Deallocated/Unwritten Error: Not Supported 00:18:18.574 Deallocated Read Value: Unknown 00:18:18.574 Deallocate in Write Zeroes: Not Supported 00:18:18.574 Deallocated Guard Field: 0xFFFF 00:18:18.574 Flush: Supported 00:18:18.574 Reservation: Not Supported 00:18:18.574 Namespace Sharing Capabilities: Multiple Controllers 00:18:18.574 Size (in LBAs): 1310720 (5GiB) 00:18:18.574 Capacity (in LBAs): 1310720 (5GiB) 00:18:18.574 Utilization (in LBAs): 1310720 (5GiB) 00:18:18.574 UUID: 8dcc059d-9e58-4525-9c45-8669bf14650a 00:18:18.574 Thin Provisioning: Not Supported 00:18:18.574 Per-NS Atomic Units: Yes 00:18:18.574 Atomic Boundary Size (Normal): 0 00:18:18.574 Atomic Boundary Size (PFail): 0 00:18:18.574 Atomic Boundary Offset: 0 00:18:18.574 NGUID/EUI64 Never Reused: No 00:18:18.574 ANA group ID: 1 00:18:18.574 Namespace Write Protected: No 00:18:18.574 Number of LBA Formats: 1 00:18:18.574 Current LBA Format: LBA Format #00 00:18:18.574 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:18.574 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.574 rmmod nvme_tcp 00:18:18.574 rmmod nvme_fabrics 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:18.574 
16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:18.574 16:33:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:19.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:19.509 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:19.509 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:19.509 00:18:19.509 real 0m2.807s 00:18:19.509 user 0m0.990s 00:18:19.509 sys 0m1.326s 00:18:19.509 16:33:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:19.509 ************************************ 00:18:19.509 END TEST nvmf_identify_kernel_target 00:18:19.509 ************************************ 00:18:19.509 16:33:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.509 16:33:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:19.509 16:33:37 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:19.509 16:33:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:19.509 16:33:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:19.509 16:33:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:19.509 ************************************ 00:18:19.509 START TEST nvmf_auth_host 00:18:19.509 ************************************ 00:18:19.510 16:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:19.510 * Looking for test storage... 
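Before the nvmf_auth_host output continues, note the shape of the kernel-target lifecycle that the identify_kernel_target run above just completed: configure_kernel_target builds a subsystem, namespace, and TCP port under /sys/kernel/config/nvmet and links them together, the test queries the target with nvme discover and spdk_nvme_identify, and clean_kernel_target unwinds everything in reverse order. A condensed sketch of that build-and-teardown, assuming the standard nvmet configfs attribute names (the trace shows the echoed values but not the attribute files they land in):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1

  # build the target
  mkdir "$subsys" "$ns" "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$ns/device_path"
  echo 1            > "$ns/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

  # teardown, mirroring clean_kernel_target in the trace above
  echo 0 > "$ns/enable"
  rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$ns" "$port" "$subsys"
  modprobe -r nvmet_tcp nvmet

The serial-number echo seen in the trace ("SPDK-nqn.2016-06.io.spdk:testnqn") is omitted from this sketch; the exact attribute it targets is defined in nvmf/common.sh.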
00:18:19.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.769 16:33:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:19.770 Cannot find device "nvmf_tgt_br" 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.770 Cannot find device "nvmf_tgt_br2" 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:19.770 Cannot find device "nvmf_tgt_br" 
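The interface teardown above is best-effort, which is why the "Cannot find device" and "Cannot open network namespace" messages around it are expected; nvmf_veth_init then rebuilds the test network from scratch. A minimal sketch of that topology, using only the interface names and addresses printed in this log (not a verbatim copy of nvmf/common.sh), looks like:

    # Sketch of the veth/bridge layout nvmf_veth_init sets up (run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> first target, as verified in the log below
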
00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:19.770 Cannot find device "nvmf_tgt_br2" 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.770 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:20.029 16:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:20.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:18:20.029 00:18:20.029 --- 10.0.0.2 ping statistics --- 00:18:20.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.029 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:20.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:20.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:20.029 00:18:20.029 --- 10.0.0.3 ping statistics --- 00:18:20.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.029 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:20.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:20.029 00:18:20.029 --- 10.0.0.1 ping statistics --- 00:18:20.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.029 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91679 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91679 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91679 ']' 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.029 16:33:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.029 16:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.963 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.963 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:20.963 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.963 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.963 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3e156e913c6ac1e6281ba4db66f39b0d 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4OK 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3e156e913c6ac1e6281ba4db66f39b0d 0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3e156e913c6ac1e6281ba4db66f39b0d 0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3e156e913c6ac1e6281ba4db66f39b0d 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4OK 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4OK 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.4OK 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=61166554a7cc5ecbf44c798ecaecff32b5104ebb975af2a555ea875f9c5e36fe 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Osi 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 61166554a7cc5ecbf44c798ecaecff32b5104ebb975af2a555ea875f9c5e36fe 3 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 61166554a7cc5ecbf44c798ecaecff32b5104ebb975af2a555ea875f9c5e36fe 3 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=61166554a7cc5ecbf44c798ecaecff32b5104ebb975af2a555ea875f9c5e36fe 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Osi 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Osi 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Osi 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f7eedef9836863ce2a861e0694e6de10c9802aba4455bd2f 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RF0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f7eedef9836863ce2a861e0694e6de10c9802aba4455bd2f 0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f7eedef9836863ce2a861e0694e6de10c9802aba4455bd2f 0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f7eedef9836863ce2a861e0694e6de10c9802aba4455bd2f 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RF0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RF0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.RF0 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:21.221 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d57412176bc8b629ad230e43147042a8c56a7f872344f66 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GPn 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d57412176bc8b629ad230e43147042a8c56a7f872344f66 2 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d57412176bc8b629ad230e43147042a8c56a7f872344f66 2 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d57412176bc8b629ad230e43147042a8c56a7f872344f66 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:21.222 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:21.479 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GPn 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GPn 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.GPn 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dce5c3be80174af089a86e6c402e8635 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ux9 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dce5c3be80174af089a86e6c402e8635 
1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dce5c3be80174af089a86e6c402e8635 1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dce5c3be80174af089a86e6c402e8635 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ux9 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ux9 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ux9 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5563fcc64b86de31fa3ccf08cd2583ff 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yav 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5563fcc64b86de31fa3ccf08cd2583ff 1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5563fcc64b86de31fa3ccf08cd2583ff 1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5563fcc64b86de31fa3ccf08cd2583ff 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yav 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yav 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.yav 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:21.480 16:33:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cb467d71b73075b1a399e00b54ab1f757b881c7904e50975 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HDY 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cb467d71b73075b1a399e00b54ab1f757b881c7904e50975 2 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cb467d71b73075b1a399e00b54ab1f757b881c7904e50975 2 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cb467d71b73075b1a399e00b54ab1f757b881c7904e50975 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HDY 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HDY 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.HDY 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=14f715ce50a46e93991c001caeb0e3b1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.epH 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 14f715ce50a46e93991c001caeb0e3b1 0 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 14f715ce50a46e93991c001caeb0e3b1 0 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=14f715ce50a46e93991c001caeb0e3b1 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:21.480 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.epH 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.epH 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.epH 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e810feb3907222accad6f0de820f532e8f00bffeb812b59f840b63408e7c173 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jNL 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e810feb3907222accad6f0de820f532e8f00bffeb812b59f840b63408e7c173 3 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e810feb3907222accad6f0de820f532e8f00bffeb812b59f840b63408e7c173 3 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e810feb3907222accad6f0de820f532e8f00bffeb812b59f840b63408e7c173 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jNL 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jNL 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.jNL 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91679 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91679 ']' 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:21.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
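Each gen_dhchap_key call above draws random bytes with xxd -p, keeps them as an ASCII hex string, and hands that string to an inline python helper which wraps it into an NVMe DH-HMAC-CHAP secret (the values written to the /tmp/spdk.key-* files). A rough stand-alone equivalent, under the assumption that the secret is base64 of the hex string with its little-endian CRC-32 appended and that the two-digit tag is the hash id (0=null, 1=sha256, 2=sha384, 3=sha512, matching the 00/01/02/03 prefixes seen later in this log); this is a hedged sketch, not the exact nvmf/common.sh implementation:

    # Sketch of the key files produced above.
    gen_dhchap_key_sketch() {
        local hash_id=$1 hexlen=$2    # hexlen matches the 32/48/64 "len" values in the log
        local key
        key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # the ASCII hex string is the secret material
        # Append CRC-32 (assumed little-endian), base64-encode, and tag with the hash id.
        python3 -c 'import base64,binascii,struct,sys; k=sys.argv[2].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[1]), base64.b64encode(k + struct.pack("<I", binascii.crc32(k))).decode()))' "$hash_id" "$key"
    }
    keyfile=$(mktemp -t spdk.key-null.XXX)
    gen_dhchap_key_sketch 0 32 > "$keyfile" && chmod 0600 "$keyfile"   # mirrors keys[0] above
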
00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:21.738 16:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4OK 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Osi ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Osi 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.RF0 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.GPn ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GPn 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ux9 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.yav ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yav 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.HDY 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.epH ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.epH 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.jNL 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
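The loop above has registered every generated key and ckey with the running nvmf_tgt; the configure_kernel_target steps that start here continue below. rpc_cmd in this harness is a thin wrapper around SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket named in the waitforlisten message, so the same registration could be issued directly; a sketch reusing this run's temp-file names purely as placeholders:

    # Hedged equivalent of the rpc_cmd keyring_file_add_key calls in this test.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    "$RPC" -s "$SOCK" keyring_file_add_key key0  /tmp/spdk.key-null.4OK      # DHHC-1:00: host secret
    "$RPC" -s "$SOCK" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Osi    # its controller key
    "$RPC" -s "$SOCK" keyring_file_add_key key1  /tmp/spdk.key-null.RF0
    "$RPC" -s "$SOCK" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GPn
    # ...key2/ckey2, key3/ckey3 and key4 follow the same pattern as above.
    "$RPC" -s "$SOCK" keyring_get_keys   # list what the target's keyring now holds
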
00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:21.996 16:33:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:22.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:22.511 Waiting for block devices as requested 00:18:22.511 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:22.511 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:23.076 No valid GPT data, bailing 00:18:23.076 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:23.333 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:23.334 No valid GPT data, bailing 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:23.334 No valid GPT data, bailing 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:23.334 No valid GPT data, bailing 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:18:23.334 16:33:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:18:23.334 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -a 10.0.0.1 -t tcp -s 4420 00:18:23.592 00:18:23.592 Discovery Log Number of Records 2, Generation counter 2 00:18:23.592 =====Discovery Log Entry 0====== 00:18:23.592 trtype: tcp 00:18:23.592 adrfam: ipv4 00:18:23.592 subtype: current discovery subsystem 00:18:23.592 treq: not specified, sq flow control disable supported 00:18:23.592 portid: 1 00:18:23.592 trsvcid: 4420 00:18:23.592 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:23.592 traddr: 10.0.0.1 00:18:23.592 eflags: none 00:18:23.592 sectype: none 00:18:23.592 =====Discovery Log Entry 1====== 00:18:23.592 trtype: tcp 00:18:23.592 adrfam: ipv4 00:18:23.592 subtype: nvme subsystem 00:18:23.592 treq: not specified, sq flow control disable supported 00:18:23.592 portid: 1 00:18:23.592 trsvcid: 4420 00:18:23.592 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:23.592 traddr: 10.0.0.1 00:18:23.592 eflags: none 00:18:23.592 sectype: none 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.592 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.851 nvme0n1 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.851 nvme0n1 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:23.851 16:33:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:23.851 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.852 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.111 nvme0n1 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.111 16:33:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.111 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.370 nvme0n1 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:24.370 16:33:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.370 nvme0n1 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.370 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.629 nvme0n1 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:24.629 16:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.886 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.887 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.144 nvme0n1 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.144 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.402 nvme0n1 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.402 16:33:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.402 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.660 nvme0n1 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.660 nvme0n1 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.660 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
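For reference, the host-side procedure that each connect_authenticate pass in this trace exercises reduces to three RPC calls: constrain the DH-HMAC-CHAP digests and DH groups the initiator will negotiate, attach the controller with the per-key DHHC-1 secrets, then verify the controller came up and tear it down. The condensation below is hand-written from the rpc_cmd invocations logged above and is not part of host/auth.sh itself; it assumes SPDK's scripts/rpc.py is on PATH, that the target at 10.0.0.1:4420 exports nqn.2024-02.io.spdk:cnode0 with matching keys, and that the DHHC-1 secrets are already registered in the keyring under the names key1/ckey1 (done earlier in the test, outside this excerpt).

# Restrict the initiator to one digest/DH-group pair (sha256 + ffdhe3072,
# matching the round logged around this point in the trace).
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
# Attach with DH-HMAC-CHAP: --dhchap-key authenticates the host; the optional
# --dhchap-ctrlr-key makes the authentication bidirectional.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# A successful handshake exposes the controller (and its namespace, nvme0n1 in
# this log); confirm it is present, then detach.
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
rpc.py bdev_nvme_detach_controller nvme0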
00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.919 16:33:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.919 nvme0n1 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:25.919 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
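The loop structure driving these entries is visible in the host/auth.sh@100-@104 trace markers: for every digest, every DH group, and every key index, the test first provisions the expected key on the kernel nvmet target (nvmet_auth_set_key) and then performs the authenticated attach from the SPDK host side (connect_authenticate). A rough reconstruction of that sweep is sketched below, with the helper bodies and key arrays left to host/auth.sh and the digest/DH-group lists taken from the printf output near the top of this excerpt; it is for orientation only, not the verbatim test source.

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
# keys[0..4]/ckeys[0..4] hold the DHHC-1 secrets echoed in the trace above;
# keyid 4 has no controller key, so its ckeys entry is empty.

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			# Program the target side for this digest/dhgroup/keyid combination ...
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
			# ... then attach from the host, verify the controller, and detach.
			connect_authenticate "$digest" "$dhgroup" "$keyid"
		done
	done
done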
00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.501 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.758 nvme0n1 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.758 16:33:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.016 nvme0n1 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.016 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.275 nvme0n1 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.275 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.533 nvme0n1 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.533 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:27.534 16:33:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.534 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.791 nvme0n1 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:27.791 16:33:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.714 nvme0n1 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.714 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:29.715 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.973 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.973 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:18:29.973 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.973 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:29.973 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:29.973 16:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:29.973 16:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.973 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.973 16:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.231 nvme0n1 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.231 
16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:30.231 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.232 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.490 nvme0n1 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.490 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.749 16:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.007 nvme0n1 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.007 16:33:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.007 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.264 nvme0n1 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.264 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.522 16:33:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.087 nvme0n1 00:18:32.087 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.088 16:33:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.088 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.654 nvme0n1 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.654 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.655 16:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.221 nvme0n1 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.221 
16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:33.221 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
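Each round recorded in this trace follows the same cycle: program a DHHC-1 secret for the host NQN on the kernel nvmet target, restrict the SPDK initiator to a single digest/DH-group pair, attach over TCP with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller secret exists for that key id), confirm the controller appears in bdev_nvme_get_controllers, and detach before the next iteration. The sketch below is a hedged reconstruction of that cycle from the surrounding log, not the literal host/auth.sh source: it assumes rpc_cmd is the autotest wrapper around scripts/rpc.py for the running target, nvmet_auth_set_key is the test's own helper that installs the matching DHHC-1 secret on the target side, and keys/ckeys are the secret arrays generated earlier in this run; the NQNs, address, port and key names are copied verbatim from the log entries above.

  # Hedged sketch of one authentication round as exercised in this trace.
  # rpc_cmd, nvmet_auth_set_key and the keys/ckeys arrays are assumed to be
  # defined earlier in the test run (see the note above).
  connect_authenticate_sketch() {
      local digest=$1 dhgroup=$2 keyid=$3

      # Target side: install the secret (and optional controller secret)
      # that nqn.2024-02.io.spdk:cnode0 will expect from host0.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

      # Host side: limit DH-HMAC-CHAP negotiation to this digest and DH group.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Connect with key<id>; pass ckey<id> only when a controller key exists,
      # mirroring the ${ckeys[keyid]:+...} expansion seen in the log.
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # The round passes if the controller shows up; detach for the next round.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

  # e.g. the iteration being logged around this point:
  connect_authenticate_sketch sha256 ffdhe8192 3

Iterating key ids 0 through 4 for every digest and DH group, and dropping the controller key where none is defined (as with key4 above, whose ckey is empty), exercises both unidirectional and bidirectional authentication across all parameter combinations covered by this test.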
00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.222 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.787 nvme0n1 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:33.787 
16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.787 16:33:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.353 nvme0n1 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.353 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.612 nvme0n1 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
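Once a key is installed on the target, connect_authenticate (auth.sh@104, expanded at @55-@65 throughout this section) performs the host-side half of the check. A condensed sketch of that flow as it appears in the trace; rpc_cmd is assumed to wrap SPDK's scripts/rpc.py, the transport address and NQNs are the ones printed in the log, and key<N>/ckey<N> refer to keys registered earlier in the script:

  rpc_cmd() { scripts/rpc.py "$@"; }   # assumption: plain wrapper around the SPDK RPC client

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # add --dhchap-ctrlr-key only when a controller key exists for this keyid
      # (ckeys[] is populated elsewhere in the real script)
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # restrict the initiator to the digest/dhgroup combination under test
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # connect; this is where DH-HMAC-CHAP actually runs
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # the step passes only if the authenticated controller shows up, then clean up
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]] || return 1
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

  connect_authenticate sha384 ffdhe2048 1   # the combination being exercised at this point

The nvme0n1 lines interleaved in the trace are the namespace of that temporary controller appearing and disappearing as each attach/detach cycle completes.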
00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.612 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:34.613 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.613 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:34.613 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:34.613 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:34.613 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.613 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.613 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.871 nvme0n1 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.871 16:33:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.871 nvme0n1 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:34.871 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.130 nvme0n1 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.130 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.388 nvme0n1 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.388 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
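The get_main_ns_ip expansion seen repeatedly at nvmf/common.sh@741-@755 picks which address the initiator should dial: it maps the transport under test to the name of the variable holding the address, then dereferences it. Reconstructed as a standalone helper; the variable values below are the ones from this run except NVMF_FIRST_TARGET_IP, which is not shown in this slice and is only set to keep the sketch self-contained:

  TEST_TRANSPORT=tcp
  NVMF_INITIATOR_IP=10.0.0.1
  NVMF_FIRST_TARGET_IP=10.0.0.2   # assumed value, not visible here

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs read the first target IP
          ["tcp"]=NVMF_INITIATOR_IP       # tcp runs read the initiator IP
      )

      [[ -z $TEST_TRANSPORT ]] && return 1                    # no transport configured
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                             # the named variable must be set
      echo "${!ip}"                                           # -> 10.0.0.1 in this run
  }

  get_main_ns_ip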
00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.389 nvme0n1 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.389 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
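Here the keyid counter wraps back to 0 and the dhgroup advances to ffdhe3072, which is the outer sweep at auth.sh@100-@102 moving on to its next combination. A structural sketch of that sweep; the helper bodies are stubs (see the sketches earlier in this section), and the digest/dhgroup lists only contain values visible in this slice of the run:

  nvmet_auth_set_key()   { echo "target: $1/$2 keyid=$3"; }   # stub
  connect_authenticate() { echo "host:   $1/$2 keyid=$3"; }   # stub

  digests=(sha256 sha384)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
  keys=(k0 k1 k2 k3 k4)   # five keyids, 0..4, as in the trace; real entries are DHHC-1 strings

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side (@103)
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side (@104)
          done
      done
  done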
00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.661 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.661 nvme0n1 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.662 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.920 nvme0n1 00:18:35.920 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.920 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:35.920 16:33:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:35.920 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.920 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.920 16:33:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:35.920 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.921 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.179 nvme0n1 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.179 nvme0n1 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.179 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.179 16:33:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.437 nvme0n1 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.437 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:36.695 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.696 nvme0n1 00:18:36.696 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.954 16:33:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.954 16:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.212 nvme0n1 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:37.212 16:33:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.212 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.470 nvme0n1 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:37.470 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.728 nvme0n1 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.728 16:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.987 nvme0n1 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.987 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.551 nvme0n1 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.551 16:33:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.551 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.810 nvme0n1 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.810 16:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.068 nvme0n1 00:18:39.068 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.068 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.068 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.068 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.068 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.068 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:39.326 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.327 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.585 nvme0n1 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.585 16:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.151 nvme0n1 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.151 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.717 nvme0n1 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.717 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.975 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.976 16:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.598 nvme0n1 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.598 16:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.599 16:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:41.599 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.599 16:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.856 nvme0n1 00:18:41.856 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.115 16:34:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.115 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.681 nvme0n1 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.681 nvme0n1 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.681 16:34:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.681 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.940 nvme0n1 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.940 16:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.940 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.199 nvme0n1 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.199 16:34:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.199 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.200 16:34:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.200 nvme0n1 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.200 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.458 nvme0n1 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.458 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.729 nvme0n1 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.729 
16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.729 16:34:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.729 nvme0n1 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.729 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
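[Editorial note] Each pass through this log repeats the same host-side cycle once the target-side key has been written: restrict the allowed DH-HMAC-CHAP digest and DH group with bdev_nvme_set_options, attach the controller with the per-key secrets, confirm the controller actually appears, then detach before the next combination. Below is a condensed, stand-alone sketch of one such cycle; it uses scripts/rpc.py in place of the test framework's rpc_cmd wrapper, and it assumes the target is already listening on 10.0.0.1:4420 with secrets named key1/ckey1 provisioned earlier in the test (neither step is shown in this excerpt). The RPC names and flags are taken verbatim from the log entries above.

    #!/usr/bin/env bash
    # Sketch of one connect/verify/detach cycle from this test run.
    # Assumptions: SPDK target already listening on 10.0.0.1:4420, and
    # DH-HMAC-CHAP secrets named key1/ckey1 registered earlier in the test.
    set -euo pipefail

    rpc=./scripts/rpc.py   # stand-in for the test framework's rpc_cmd helper

    # Restrict the host to a single digest/dhgroup combination for this pass.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Attach with the host key and the controller (bidirectional) key.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Authentication succeeded only if the controller is actually present.
    [[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

    # Tear down before the next digest/dhgroup/key combination.
    $rpc bdev_nvme_detach_controller nvme0

Detaching at the end of every pass mirrors what the log does between iterations, so a controller left over from one digest/dhgroup combination cannot mask an authentication failure in the next.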
00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.987 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.988 16:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.988 nvme0n1 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.988 16:34:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
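[Editorial note] The get_main_ns_ip trace that starts just above (and finishes in the entries that follow) reduces to a transport-keyed indirection: pick NVMF_INITIATOR_IP for tcp or NVMF_FIRST_TARGET_IP for rdma, then dereference that variable name to obtain the address (10.0.0.1 in this run). A condensed sketch of that selection logic is below; the TEST_TRANSPORT variable and the example exports are assumed stand-ins for whatever the surrounding test environment provides, while the candidate names come straight from the log.

    # Condensed sketch of the address lookup traced in the surrounding entries
    # (the real helper lives in nvmf/common.sh and is driven by the test env).
    get_main_ns_ip() {
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -n ${TEST_TRANSPORT:-} ]] || return 1
        local varname=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
        [[ -n $varname ]] || return 1
        local ip=${!varname}                              # indirect expansion -> 10.0.0.1
        [[ -n $ip ]] || return 1
        echo "$ip"
    }

    # Example matching this run (assumed values):
    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # prints 10.0.0.1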
00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.988 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.245 nvme0n1 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:44.246 
16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.246 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.503 nvme0n1 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:44.503 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.504 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.762 nvme0n1 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.762 16:34:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.762 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.020 nvme0n1 00:18:45.020 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.020 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.020 16:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.020 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.020 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.020 16:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
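
The block above is one pass of the test's inner loop: nvmet_auth_set_key programs the kernel target with the hmac(sha512) digest, the ffdhe4096 DH group and the DHHC-1 secret for a key slot, and connect_authenticate then exercises the SPDK host side over JSON-RPC before the next slot is tried. Below is a condensed sketch of that host-side sequence, restricted to the RPCs that are visible in the trace; rpc_cmd is the autotest wrapper around scripts/rpc.py, the address and NQNs are the fixture values shown above, and key1/ckey1 are key names registered earlier in the run (outside this excerpt).

# One connect/verify/teardown pass as traced for keyid 1 -- a sketch, not the script verbatim.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1    # ctrlr key only for slots that have one
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # controller came up, auth succeeded
rpc_cmd bdev_nvme_detach_controller nvme0             # tear down before the next key/dhgroup combination
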
00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.020 nvme0n1 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.020 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.278 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.279 nvme0n1 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.279 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.537 nvme0n1 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.537 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:45.795 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
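
A small shell detail recurs at host/auth.sh@58 in the trace: the optional controller key is built as an array through a ':+' expansion, so a slot with no controller key (keyid 4 above, where ckey is empty) contributes zero words and --dhchap-ctrlr-key is simply absent from the attach call. A standalone illustration of that idiom follows; the key values are placeholders, only the expansion itself is taken from the trace.

# ':+' inside an array assignment: zero elements when the value is empty, two when it is set.
ckeys=("DHHC-1:03:placeholder=:" "")
for keyid in 0 1; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid: ${#ckey[@]} extra argument(s): ${ckey[*]}"
done
# keyid=0: 2 extra argument(s): --dhchap-ctrlr-key ckey0
# keyid=1: 0 extra argument(s):
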
00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.796 16:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.054 nvme0n1 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
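
The get_main_ns_ip trace that repeats before every attach picks the address variable by transport: the associative array maps "tcp" to the name NVMF_INITIATOR_IP, and the value 10.0.0.1 is what ends up in the -a argument. The sketch below reproduces that selection step; the indirect expansion on the last line is an assumption about how the name resolves to its value, since that resolution is not visible in the trace, and the rdma address is a placeholder.

# Illustration of the transport-to-address lookup seen in get_main_ns_ip (not the script's literal code).
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2                      # placeholder for the rdma path
declare -A ip_candidates=( ["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP )
transport=tcp
ip=${ip_candidates[$transport]}                    # -> NVMF_INITIATOR_IP (a variable name)
echo "${!ip}"                                      # -> 10.0.0.1, the address passed to -a
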
00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.054 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.312 nvme0n1 00:18:46.312 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.312 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.312 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.312 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.312 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.312 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:46.570 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.571 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.829 nvme0n1 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.829 16:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 nvme0n1 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.093 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.661 nvme0n1 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.661 16:34:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2UxNTZlOTEzYzZhYzFlNjI4MWJhNGRiNjZmMzliMGQTwUlX: 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: ]] 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjExNjY1NTRhN2NjNWVjYmY0NGM3OThlY2FlY2ZmMzJiNTEwNGViYjk3NWFmMmE1NTVlYTg3NWY5YzVlMzZmZcKC4Sk=: 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.661 16:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.229 nvme0n1 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.229 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.797 nvme0n1 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.797 16:34:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGNlNWMzYmU4MDE3NGFmMDg5YTg2ZTZjNDAyZTg2MzXVJKDM: 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: ]] 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTU2M2ZjYzY0Yjg2ZGUzMWZhM2NjZjA4Y2QyNTgzZmb84JOP: 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.797 16:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.363 nvme0n1 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I0NjdkNzFiNzMwNzViMWEzOTllMDBiNTRhYjFmNzU3Yjg4MWM3OTA0ZTUwOTc1Q5nZfA==: 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: ]] 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTRmNzE1Y2U1MGE0NmU5Mzk5MWMwMDFjYWViMGUzYjFJlkcZ: 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:49.363 16:34:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.363 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.364 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.930 nvme0n1 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmU4MTBmZWIzOTA3MjIyYWNjYWQ2ZjBkZTgyMGY1MzJlOGYwMGJmZmViODEyYjU5Zjg0MGI2MzQwOGU3YzE3Myccbus=: 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:49.930 16:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.497 nvme0n1 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjdlZWRlZjk4MzY4NjNjZTJhODYxZTA2OTRlNmRlMTBjOTgwMmFiYTQ0NTViZDJm4AVcQA==: 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ1NzQxMjE3NmJjOGI2MjlhZDIzMGU0MzE0NzA0MmE4YzU2YTdmODcyMzQ0ZjY2ItL9Fw==: 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.497 
16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.497 2024/07/21 16:34:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:50.497 request: 00:18:50.497 { 00:18:50.497 "method": "bdev_nvme_attach_controller", 00:18:50.497 "params": { 00:18:50.497 "name": "nvme0", 00:18:50.497 "trtype": "tcp", 00:18:50.497 "traddr": "10.0.0.1", 00:18:50.497 "adrfam": "ipv4", 00:18:50.497 "trsvcid": "4420", 00:18:50.497 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:50.497 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:50.497 "prchk_reftag": false, 00:18:50.497 "prchk_guard": false, 00:18:50.497 "hdgst": false, 00:18:50.497 "ddgst": false 00:18:50.497 } 00:18:50.497 } 00:18:50.497 Got JSON-RPC error response 00:18:50.497 GoRPCClient: error on JSON-RPC call 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.497 2024/07/21 16:34:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:50.497 request: 00:18:50.497 { 00:18:50.497 "method": "bdev_nvme_attach_controller", 00:18:50.497 "params": { 00:18:50.497 "name": 
"nvme0", 00:18:50.497 "trtype": "tcp", 00:18:50.497 "traddr": "10.0.0.1", 00:18:50.497 "adrfam": "ipv4", 00:18:50.497 "trsvcid": "4420", 00:18:50.497 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:50.497 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:50.497 "prchk_reftag": false, 00:18:50.497 "prchk_guard": false, 00:18:50.497 "hdgst": false, 00:18:50.497 "ddgst": false, 00:18:50.497 "dhchap_key": "key2" 00:18:50.497 } 00:18:50.497 } 00:18:50.497 Got JSON-RPC error response 00:18:50.497 GoRPCClient: error on JSON-RPC call 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.497 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.756 2024/07/21 16:34:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:50.756 request: 00:18:50.756 { 00:18:50.756 "method": "bdev_nvme_attach_controller", 00:18:50.756 "params": { 00:18:50.756 "name": "nvme0", 00:18:50.756 "trtype": "tcp", 00:18:50.756 "traddr": "10.0.0.1", 00:18:50.756 "adrfam": "ipv4", 00:18:50.756 "trsvcid": "4420", 00:18:50.756 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:50.756 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:50.756 "prchk_reftag": false, 00:18:50.756 "prchk_guard": false, 00:18:50.756 "hdgst": false, 00:18:50.756 "ddgst": false, 00:18:50.756 "dhchap_key": "key1", 00:18:50.756 "dhchap_ctrlr_key": "ckey2" 00:18:50.756 } 00:18:50.756 } 00:18:50.756 Got JSON-RPC error response 00:18:50.756 GoRPCClient: error on JSON-RPC call 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.756 rmmod nvme_tcp 00:18:50.756 rmmod nvme_fabrics 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91679 ']' 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91679 00:18:50.756 16:34:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91679 ']' 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91679 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91679 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:50.756 killing process with pid 91679 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91679' 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91679 00:18:50.756 16:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91679 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:51.014 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:51.273 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:51.273 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:51.273 16:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:51.839 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:51.839 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:52.097 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:52.097 16:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4OK /tmp/spdk.key-null.RF0 /tmp/spdk.key-sha256.ux9 /tmp/spdk.key-sha384.HDY /tmp/spdk.key-sha512.jNL /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:52.097 16:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:52.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:52.355 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:52.355 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:52.355 00:18:52.355 real 0m32.914s 00:18:52.355 user 0m30.265s 00:18:52.355 sys 0m3.737s 00:18:52.355 16:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.355 16:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.355 ************************************ 00:18:52.355 END TEST nvmf_auth_host 00:18:52.355 ************************************ 00:18:52.614 16:34:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:52.614 16:34:10 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:18:52.614 16:34:10 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:52.614 16:34:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:52.614 16:34:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.614 16:34:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.614 ************************************ 00:18:52.614 START TEST nvmf_digest 00:18:52.614 ************************************ 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:52.614 * Looking for test storage... 
00:18:52.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:52.614 Cannot find device "nvmf_tgt_br" 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.614 Cannot find device "nvmf_tgt_br2" 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:18:52.614 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:52.615 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:52.615 Cannot find device "nvmf_tgt_br" 00:18:52.615 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:18:52.615 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:52.615 Cannot find device "nvmf_tgt_br2" 00:18:52.615 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:18:52.615 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.874 16:34:10 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:52.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:52.874 00:18:52.874 --- 10.0.0.2 ping statistics --- 00:18:52.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.874 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:52.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:52.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:18:52.874 00:18:52.874 --- 10.0.0.3 ping statistics --- 00:18:52.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.874 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:52.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:18:52.874 00:18:52.874 --- 10.0.0.1 ping statistics --- 00:18:52.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.874 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:52.874 ************************************ 00:18:52.874 START TEST nvmf_digest_clean 00:18:52.874 ************************************ 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93246 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93246 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93246 ']' 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.874 16:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:53.132 [2024-07-21 16:34:11.129587] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:18:53.132 [2024-07-21 16:34:11.129682] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.132 [2024-07-21 16:34:11.271588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.390 [2024-07-21 16:34:11.387926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.390 [2024-07-21 16:34:11.388428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.390 [2024-07-21 16:34:11.388561] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.390 [2024-07-21 16:34:11.388699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.390 [2024-07-21 16:34:11.388822] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:53.390 [2024-07-21 16:34:11.388978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.962 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.962 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:53.962 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.962 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.962 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:54.254 null0 00:18:54.254 [2024-07-21 16:34:12.308872] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.254 [2024-07-21 16:34:12.333033] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93297 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93297 /var/tmp/bperf.sock 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93297 ']' 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.254 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:18:54.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:54.255 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.255 16:34:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:54.255 [2024-07-21 16:34:12.385910] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:18:54.255 [2024-07-21 16:34:12.385998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93297 ] 00:18:54.530 [2024-07-21 16:34:12.521133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.530 [2024-07-21 16:34:12.625847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.460 16:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.460 16:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:55.460 16:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:55.460 16:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:55.460 16:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:55.722 16:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:55.722 16:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:55.978 nvme0n1 00:18:55.978 16:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:55.978 16:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:55.979 Running I/O for 2 seconds... 
00:18:58.511 00:18:58.511 Latency(us) 00:18:58.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.511 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:58.511 nvme0n1 : 2.00 23171.43 90.51 0.00 0.00 5519.12 2978.91 16443.58 00:18:58.511 =================================================================================================================== 00:18:58.511 Total : 23171.43 90.51 0.00 0.00 5519.12 2978.91 16443.58 00:18:58.511 0 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:58.511 | select(.opcode=="crc32c") 00:18:58.511 | "\(.module_name) \(.executed)"' 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93297 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93297 ']' 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93297 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93297 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93297' 00:18:58.511 killing process with pid 93297 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93297 00:18:58.511 Received shutdown signal, test time was about 2.000000 seconds 00:18:58.511 00:18:58.511 Latency(us) 00:18:58.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.511 =================================================================================================================== 00:18:58.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93297 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93389 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93389 /var/tmp/bperf.sock 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93389 ']' 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.511 16:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:58.769 [2024-07-21 16:34:16.751187] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:18:58.769 [2024-07-21 16:34:16.751313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93389 ] 00:18:58.769 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:58.769 Zero copy mechanism will not be used. 
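Each run_bperf pass above follows the same RPC sequence against the bperf.sock instance: start bdevperf paused, finish framework init, attach the controller with data digest enabled, then drive I/O from bdevperf.py. A minimal standalone sketch of that flow, using only the binaries and arguments visible in the xtrace (the rpc helper function and the backgrounding are illustrative, not the harness's own code):

  # launch bdevperf paused (--wait-for-rpc) so digest settings land before I/O starts
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # leave the --wait-for-rpc state, then attach with data digest (crc32c) enabled
  rpc framework_start_init
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the configured workload for the -t seconds given above
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests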
00:18:58.769 [2024-07-21 16:34:16.889056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.027 [2024-07-21 16:34:17.004403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.591 16:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.591 16:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:59.591 16:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:59.591 16:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:59.591 16:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:59.847 16:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:59.847 16:34:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:00.105 nvme0n1 00:19:00.105 16:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:00.105 16:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:00.362 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:00.362 Zero copy mechanism will not be used. 00:19:00.362 Running I/O for 2 seconds... 00:19:02.259 00:19:02.259 Latency(us) 00:19:02.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.259 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:02.259 nvme0n1 : 2.04 9680.54 1210.07 0.00 0.00 1620.98 536.20 41943.04 00:19:02.259 =================================================================================================================== 00:19:02.259 Total : 9680.54 1210.07 0.00 0.00 1620.98 536.20 41943.04 00:19:02.259 0 00:19:02.259 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:02.259 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:02.259 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:02.259 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:02.259 | select(.opcode=="crc32c") 00:19:02.259 | "\(.module_name) \(.executed)"' 00:19:02.259 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93389 00:19:02.826 16:34:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93389 ']' 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93389 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93389 00:19:02.826 killing process with pid 93389 00:19:02.826 Received shutdown signal, test time was about 2.000000 seconds 00:19:02.826 00:19:02.826 Latency(us) 00:19:02.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.826 =================================================================================================================== 00:19:02.826 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93389' 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93389 00:19:02.826 16:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93389 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93479 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93479 /var/tmp/bperf.sock 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93479 ']' 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:02.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:02.826 16:34:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:03.085 [2024-07-21 16:34:21.077317] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:19:03.085 [2024-07-21 16:34:21.077633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93479 ] 00:19:03.085 [2024-07-21 16:34:21.215199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.085 [2024-07-21 16:34:21.291342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.019 16:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.019 16:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:04.019 16:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:04.019 16:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:04.020 16:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:04.278 16:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:04.278 16:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:04.535 nvme0n1 00:19:04.535 16:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:04.535 16:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:04.794 Running I/O for 2 seconds... 
00:19:06.700 00:19:06.700 Latency(us) 00:19:06.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.700 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:06.700 nvme0n1 : 2.00 27547.21 107.61 0.00 0.00 4641.87 2383.13 12630.57 00:19:06.700 =================================================================================================================== 00:19:06.700 Total : 27547.21 107.61 0.00 0.00 4641.87 2383.13 12630.57 00:19:06.700 0 00:19:06.700 16:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:06.700 16:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:06.700 16:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:06.700 16:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:06.700 | select(.opcode=="crc32c") 00:19:06.700 | "\(.module_name) \(.executed)"' 00:19:06.700 16:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:06.958 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:06.958 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:06.958 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:06.958 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:06.958 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93479 00:19:06.958 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93479 ']' 00:19:06.959 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93479 00:19:06.959 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:06.959 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.959 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93479 00:19:06.959 killing process with pid 93479 00:19:06.959 Received shutdown signal, test time was about 2.000000 seconds 00:19:06.959 00:19:06.959 Latency(us) 00:19:06.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.959 =================================================================================================================== 00:19:06.959 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.959 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:06.959 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:06.959 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93479' 00:19:06.959 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93479 00:19:06.959 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93479 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93569 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93569 /var/tmp/bperf.sock 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93569 ']' 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:07.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.217 16:34:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:07.217 [2024-07-21 16:34:25.421113] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:19:07.217 [2024-07-21 16:34:25.421447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93569 ] 00:19:07.217 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:07.217 Zero copy mechanism will not be used. 
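The acc_module/acc_executed pair checked after every run is read from the accel_get_stats RPC filtered through the jq expression shown in the xtrace; as a standalone sketch (socket path as in the log, variable names taken from the script):

  # fetch accel statistics from the bdevperf instance and keep only the crc32c entry
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

  # without DSA the digests must have been computed in software, at least once
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c handled by $acc_module"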
00:19:07.475 [2024-07-21 16:34:25.560033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.475 [2024-07-21 16:34:25.639029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.409 16:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.409 16:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:08.409 16:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:08.409 16:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:08.409 16:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:08.684 16:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:08.684 16:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:08.961 nvme0n1 00:19:08.962 16:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:08.962 16:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:08.962 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:08.962 Zero copy mechanism will not be used. 00:19:08.962 Running I/O for 2 seconds... 00:19:11.491 00:19:11.491 Latency(us) 00:19:11.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.491 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:11.491 nvme0n1 : 2.00 6775.36 846.92 0.00 0.00 2356.71 1377.75 3902.37 00:19:11.491 =================================================================================================================== 00:19:11.491 Total : 6775.36 846.92 0.00 0.00 2356.71 1377.75 3902.37 00:19:11.491 0 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:11.491 | select(.opcode=="crc32c") 00:19:11.491 | "\(.module_name) \(.executed)"' 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93569 00:19:11.491 16:34:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93569 ']' 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93569 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93569 00:19:11.491 killing process with pid 93569 00:19:11.491 Received shutdown signal, test time was about 2.000000 seconds 00:19:11.491 00:19:11.491 Latency(us) 00:19:11.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.491 =================================================================================================================== 00:19:11.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93569' 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93569 00:19:11.491 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93569 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93246 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93246 ']' 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93246 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93246 00:19:11.749 killing process with pid 93246 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93246' 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93246 00:19:11.749 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93246 00:19:12.007 00:19:12.007 real 0m18.937s 00:19:12.007 user 0m35.575s 00:19:12.007 sys 0m4.859s 00:19:12.007 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:12.007 16:34:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:12.007 ************************************ 00:19:12.007 END TEST nvmf_digest_clean 00:19:12.007 ************************************ 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # 
run_test nvmf_digest_error run_digest_error 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:12.007 ************************************ 00:19:12.007 START TEST nvmf_digest_error 00:19:12.007 ************************************ 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93685 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93685 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93685 ']' 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.007 16:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:12.007 [2024-07-21 16:34:30.123001] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:19:12.007 [2024-07-21 16:34:30.123119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.265 [2024-07-21 16:34:30.261155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.265 [2024-07-21 16:34:30.356357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.265 [2024-07-21 16:34:30.356438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.265 [2024-07-21 16:34:30.356458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.265 [2024-07-21 16:34:30.356466] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.265 [2024-07-21 16:34:30.356473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
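As in the clean test, the target for the error test is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, and the harness blocks until the RPC socket answers before configuring it. A minimal stand-in for that waitforlisten step (the polling loop is illustrative; rpc_get_methods is only used here as a liveness probe):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # poll the UNIX domain RPC socket until the application accepts commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
      sleep 0.5
  done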
00:19:12.265 [2024-07-21 16:34:30.356507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:13.198 [2024-07-21 16:34:31.105016] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:13.198 null0 00:19:13.198 [2024-07-21 16:34:31.239126] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.198 [2024-07-21 16:34:31.263303] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93729 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93729 /var/tmp/bperf.sock 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93729 ']' 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:13.198 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.199 
16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:13.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:13.199 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.199 16:34:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:13.199 [2024-07-21 16:34:31.313777] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:19:13.199 [2024-07-21 16:34:31.313896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93729 ] 00:19:13.456 [2024-07-21 16:34:31.449505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.456 [2024-07-21 16:34:31.553005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:14.390 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:14.648 nvme0n1 00:19:14.907 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:14.907 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.907 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:14.907 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.907 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:14.907 16:34:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:14.907 Running I/O for 2 seconds... 
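The error variant differs from the clean runs only in how accel is configured: crc32c is routed to the error-injection module on the target, injection stays disabled while the initiator attaches with data digest enabled, and corruption is switched on just before the I/O loop so the completions surface data digest errors, as seen below. A condensed sketch of that sequence using the RPC calls visible in the xtrace (the helper functions are illustrative; socket paths as in the log):

  tgt_rpc()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # target: route crc32c operations through the error-injection accel module
  tgt_rpc accel_assign_opc -o crc32c -m error

  # initiator options used by the test: per-opcode NVMe error counters, bdev retry count of -1
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # keep injection disabled while the controller attaches with data digest (--ddgst) enabled
  tgt_rpc accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt the next 256 crc32c operations, then start the workload
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests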
00:19:14.907 [2024-07-21 16:34:32.983294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.907 [2024-07-21 16:34:32.983365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.907 [2024-07-21 16:34:32.983380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.907 [2024-07-21 16:34:32.994696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.907 [2024-07-21 16:34:32.994752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.907 [2024-07-21 16:34:32.994772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.907 [2024-07-21 16:34:33.004813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.907 [2024-07-21 16:34:33.004867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.907 [2024-07-21 16:34:33.004879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.907 [2024-07-21 16:34:33.016845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.907 [2024-07-21 16:34:33.016898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.907 [2024-07-21 16:34:33.016911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.907 [2024-07-21 16:34:33.026101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.907 [2024-07-21 16:34:33.026153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.907 [2024-07-21 16:34:33.026166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.907 [2024-07-21 16:34:33.037861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.907 [2024-07-21 16:34:33.037913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.907 [2024-07-21 16:34:33.037926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.907 [2024-07-21 16:34:33.049836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.907 [2024-07-21 16:34:33.049890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.907 [2024-07-21 16:34:33.049903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.908 [2024-07-21 16:34:33.060736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.908 [2024-07-21 16:34:33.060788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.908 [2024-07-21 16:34:33.060802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.908 [2024-07-21 16:34:33.070013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.908 [2024-07-21 16:34:33.070066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.908 [2024-07-21 16:34:33.070078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.908 [2024-07-21 16:34:33.082206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.908 [2024-07-21 16:34:33.082278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.908 [2024-07-21 16:34:33.082292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.908 [2024-07-21 16:34:33.092420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.908 [2024-07-21 16:34:33.092471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.908 [2024-07-21 16:34:33.092485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:14.908 [2024-07-21 16:34:33.103998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:14.908 [2024-07-21 16:34:33.104051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.908 [2024-07-21 16:34:33.104064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.115245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.115314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.115326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.126014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.126066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.126079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.135962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.136015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.136027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.147329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.147382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.147394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.157732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.157786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.157799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.169965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.170018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.170031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.180447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.180498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.180511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.190651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.190709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.190722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.201716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.201767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:15.168 [2024-07-21 16:34:33.201780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.213893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.213944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.213958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.223715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.223766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.223788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.233673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.233725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.233744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.245807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.245858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.245871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.255972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.256023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.256035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.267719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.267773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.267785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.279322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.279374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 
lba:10989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.279387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.288774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.288826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.288838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.299663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.299715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.299727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.310994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.311046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.311059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.320948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.321000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.321012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.333018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.333070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.333083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.343425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.343479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.343492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.353661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.353713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.353726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.168 [2024-07-21 16:34:33.364664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.168 [2024-07-21 16:34:33.364717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.168 [2024-07-21 16:34:33.364729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.375488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.375544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.375563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.386142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.386194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.386207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.398116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.398168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.398181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.410661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.410713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.410726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.419837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.419890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.419902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.430992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 
00:19:15.427 [2024-07-21 16:34:33.431044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.431057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.441913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.441966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.441979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.453908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.453960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.453973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.464974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.465026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.465039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.473859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.473911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.473931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.485064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.485116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.485134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.496473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.496525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.496544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.507339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.507390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.507402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.517567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.517619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.517631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.527552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.527603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.527621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.539772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.539824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.539836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.550754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.550806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.550819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.561054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.561105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.561118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.570880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.570931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.570944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.580875] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.580926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.580938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.592294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.592346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.592365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.602257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.602318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.602367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.615732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.615784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.615796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.427 [2024-07-21 16:34:33.629945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.427 [2024-07-21 16:34:33.630004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.427 [2024-07-21 16:34:33.630023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.638113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.638165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.638183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.650960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.651012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.651024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.661988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.662040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.662052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.672541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.672592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.672611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.682715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.682767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.682780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.693704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.693755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.693768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.703948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.703999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.704011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.715697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.715749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.715761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.725077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.725128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.725140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.736173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.736225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.736237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.748554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.748605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.748623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.759342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.759393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.759406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.770611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.770663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.770676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.782717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.782770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.782783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.790975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.791026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.791039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.803846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.803898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.803910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.815314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.815364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.815376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.824633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.824684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.824697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.835313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.835363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.835375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.845838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.845889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.845901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.857290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.857340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.857352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.867795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.867845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.867867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.878108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.878159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:15.686 [2024-07-21 16:34:33.878172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.686 [2024-07-21 16:34:33.889799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.686 [2024-07-21 16:34:33.889851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.686 [2024-07-21 16:34:33.889864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.945 [2024-07-21 16:34:33.899501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.945 [2024-07-21 16:34:33.899554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.945 [2024-07-21 16:34:33.899572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.945 [2024-07-21 16:34:33.911770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.945 [2024-07-21 16:34:33.911822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.945 [2024-07-21 16:34:33.911844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.945 [2024-07-21 16:34:33.921210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.945 [2024-07-21 16:34:33.921297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.945 [2024-07-21 16:34:33.921310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.945 [2024-07-21 16:34:33.932541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.945 [2024-07-21 16:34:33.932593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.945 [2024-07-21 16:34:33.932607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.945 [2024-07-21 16:34:33.942675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.945 [2024-07-21 16:34:33.942727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.945 [2024-07-21 16:34:33.942746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:33.955120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:33.955174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:4220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:33.955193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:33.965706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:33.965755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:33.965777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:33.977359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:33.977411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:33.977423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:33.988300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:33.988351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:33.988363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:33.999596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:33.999648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:33.999660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.010979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.011031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.011044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.020595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.020647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.020659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.031928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.031980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.031992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.043671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.043722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.043735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.053283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.053333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.053348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.064317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.064368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.064380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.075381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.075432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.075445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.085370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.085421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.085433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.095722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.095774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.095786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.107538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 
[2024-07-21 16:34:34.107591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.107603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.117398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.117448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.117460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.128804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.128857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.128877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.140912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.140964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.140976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:15.946 [2024-07-21 16:34:34.151413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:15.946 [2024-07-21 16:34:34.151465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:15.946 [2024-07-21 16:34:34.151478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.161048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.161100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.161113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.172521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.172572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.172585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.183661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.183712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.183725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.194624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.194677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.194690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.203937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.203987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:53 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.203999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.214362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.214413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.214426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.224329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.224379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.224391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.235824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.235876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.235888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.245443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.245494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.245506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.257756] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.257808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.257821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.267512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.267561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.267574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.279452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.279503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.279516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.288768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.288820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.205 [2024-07-21 16:34:34.288833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.205 [2024-07-21 16:34:34.299629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.205 [2024-07-21 16:34:34.299681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.299693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.206 [2024-07-21 16:34:34.310367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.310419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.310431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.206 [2024-07-21 16:34:34.320455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.320506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.320519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:16.206 [2024-07-21 16:34:34.332105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.332158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.332171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.206 [2024-07-21 16:34:34.342204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.342256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.342279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.206 [2024-07-21 16:34:34.354317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.354391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.354403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.206 [2024-07-21 16:34:34.365196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.365248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.365275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.206 [2024-07-21 16:34:34.377539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.377590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.377602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.206 [2024-07-21 16:34:34.387917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.387968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.387981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.206 [2024-07-21 16:34:34.399204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.399255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.399279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.206 [2024-07-21 16:34:34.410607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.206 [2024-07-21 16:34:34.410659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.206 [2024-07-21 16:34:34.410672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.420511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.420563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.420575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.431949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.432002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.432014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.442946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.442997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.443009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.453787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.453839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.453852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.464077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.464128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.464141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.475938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.475989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.476002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.488091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.488143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.488155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.497046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.497099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.497111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.508675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.508727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.508740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.518340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.518390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.518403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.529411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.529461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.529473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.539873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.539925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.539938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.550044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.550097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 
[2024-07-21 16:34:34.550109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.562365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.562416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.562428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.571293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.571347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.571368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.584425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.584477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.584490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.594303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.594360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.594373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.605363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.605413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.605425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.616042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.616095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.616108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.627728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.627779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24446 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.627792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.639061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.639113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.639125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.648301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.648352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.648365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.464 [2024-07-21 16:34:34.660639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.464 [2024-07-21 16:34:34.660691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.464 [2024-07-21 16:34:34.660704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.671042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.671093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.671106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.682100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.682153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.682165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.693401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.693455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.693468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.704156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.704208] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.704220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.715182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.715234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.715247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.725158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.725211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.725225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.734689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.734740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.734752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.745298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.745366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.745378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.758697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.758749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.758769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.770704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.770756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.770769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.780910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.780963] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.780975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.792883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.792935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.792947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.803458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.803509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.803522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.815200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.815251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.815275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.824480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.824534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.824546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.836038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.836091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.836103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.847417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.847469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.847481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.856914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x251d3e0) 00:19:16.722 [2024-07-21 16:34:34.856965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.722 [2024-07-21 16:34:34.856977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.722 [2024-07-21 16:34:34.869037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.723 [2024-07-21 16:34:34.869091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.723 [2024-07-21 16:34:34.869103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.723 [2024-07-21 16:34:34.879801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.723 [2024-07-21 16:34:34.879852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.723 [2024-07-21 16:34:34.879865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.723 [2024-07-21 16:34:34.889544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.723 [2024-07-21 16:34:34.889596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.723 [2024-07-21 16:34:34.889609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.723 [2024-07-21 16:34:34.898981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.723 [2024-07-21 16:34:34.899032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.723 [2024-07-21 16:34:34.899044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.723 [2024-07-21 16:34:34.911816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.723 [2024-07-21 16:34:34.911868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.723 [2024-07-21 16:34:34.911880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.723 [2024-07-21 16:34:34.922607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.723 [2024-07-21 16:34:34.922659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.723 [2024-07-21 16:34:34.922672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.980 [2024-07-21 16:34:34.932947] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.981 [2024-07-21 16:34:34.932998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.981 [2024-07-21 16:34:34.933030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.981 [2024-07-21 16:34:34.943813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.981 [2024-07-21 16:34:34.943865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.981 [2024-07-21 16:34:34.943877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.981 [2024-07-21 16:34:34.954452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.981 [2024-07-21 16:34:34.954503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.981 [2024-07-21 16:34:34.954516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.981 [2024-07-21 16:34:34.965969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x251d3e0) 00:19:16.981 [2024-07-21 16:34:34.966018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.981 [2024-07-21 16:34:34.966030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.981 00:19:16.981 Latency(us) 00:19:16.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.981 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:16.981 nvme0n1 : 2.00 23383.86 91.34 0.00 0.00 5467.76 2725.70 14656.23 00:19:16.981 =================================================================================================================== 00:19:16.981 Total : 23383.86 91.34 0.00 0.00 5467.76 2725.70 14656.23 00:19:16.981 0 00:19:16.981 16:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:16.981 16:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:16.981 16:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:16.981 16:34:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:16.981 | .driver_specific 00:19:16.981 | .nvme_error 00:19:16.981 | .status_code 00:19:16.981 | .command_transient_transport_error' 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 183 > 0 )) 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93729 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93729 ']' 00:19:17.239 16:34:35 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93729 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93729 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:17.239 killing process with pid 93729 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93729' 00:19:17.239 Received shutdown signal, test time was about 2.000000 seconds 00:19:17.239 00:19:17.239 Latency(us) 00:19:17.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.239 =================================================================================================================== 00:19:17.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93729 00:19:17.239 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93729 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93819 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93819 /var/tmp/bperf.sock 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93819 ']' 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:17.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.497 16:34:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:17.497 [2024-07-21 16:34:35.582637] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:19:17.497 [2024-07-21 16:34:35.582792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93819 ] 00:19:17.497 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:17.497 Zero copy mechanism will not be used. 00:19:17.755 [2024-07-21 16:34:35.716440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.755 [2024-07-21 16:34:35.812525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.319 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.319 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:18.319 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:18.319 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:18.885 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:18.885 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.885 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:18.885 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.885 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:18.885 16:34:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:18.885 nvme0n1 00:19:19.145 16:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:19.145 16:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.145 16:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:19.145 16:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.145 16:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:19.145 16:34:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:19.145 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:19.145 Zero copy mechanism will not be used. 00:19:19.145 Running I/O for 2 seconds...
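Note on the run that follows: the trace above re-arms crc32c corruption in the accel layer (accel_error_inject_error -o crc32c -t corrupt -i 32) after attaching the controller with data digest enabled (--ddgst), so the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions printed below are the expected outcome of the injection, not a transport failure. As a minimal sketch, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock, the per-bdev counter that host/digest.sh checks can be read back with the same RPC and jq filter that appear elsewhere in this log:
  # query nvme error statistics for the attached bdev and pull out the
  # transient transport error count (requires --nvme-error-stat, set above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
The script treats the run as passing when this count is greater than zero, which is the check seen after the first run as '(( 183 > 0 ))'.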
00:19:19.145 [2024-07-21 16:34:37.200493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.200561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.200576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.204770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.204824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.204836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.208926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.208979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.208991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.212727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.212778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.212790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.215500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.215550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.215561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.218862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.218913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.218924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.222804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.222856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.222868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.226538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.226575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.226586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.229540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.229589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.229602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.233462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.233513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.233525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.237065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.237117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.237129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.240611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.240663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.240674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.244140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.244191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.244203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.247476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.247529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.247541] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.250956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.251007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.251019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.254609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.254644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.254655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.257370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.257418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.257429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.260839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.260891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.260903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.264866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.264918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.264929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.267671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.267722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.267734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.271177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.145 [2024-07-21 16:34:37.271228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.145 [2024-07-21 16:34:37.271239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.145 [2024-07-21 16:34:37.275180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.275232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.275243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.277781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.277828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.277840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.281741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.281792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.281804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.285583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.285634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.285646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.288165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.288216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.288227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.291409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.291460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.291471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.295191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.295242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:19.146 [2024-07-21 16:34:37.295254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.298032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.298080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.298092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.301096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.301145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.301157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.304862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.304915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.304926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.307808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.307859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.307871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.311205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.311257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.311290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.315074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.315126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.315145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.318159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.318207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.318228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.322051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.322100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.322120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.325952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.326001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.326021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.330020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.330070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.330090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.332861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.332909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.332921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.336326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.336377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.336397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.340112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.340161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.340181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.343331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.343381] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.343393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.346624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.346691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.346702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.146 [2024-07-21 16:34:37.350554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.146 [2024-07-21 16:34:37.350605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.146 [2024-07-21 16:34:37.350617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.407 [2024-07-21 16:34:37.354443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.407 [2024-07-21 16:34:37.354494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.407 [2024-07-21 16:34:37.354506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.407 [2024-07-21 16:34:37.357299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.407 [2024-07-21 16:34:37.357346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.407 [2024-07-21 16:34:37.357357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.407 [2024-07-21 16:34:37.361089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.407 [2024-07-21 16:34:37.361140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.407 [2024-07-21 16:34:37.361158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.407 [2024-07-21 16:34:37.364407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.407 [2024-07-21 16:34:37.364458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.364470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.367161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.367212] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.367223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.371243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.371303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.371315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.373973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.374022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.374033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.377608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.377643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.377654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.380807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.380858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.380870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.383846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.383897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.383908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.386987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.387038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.387049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.390718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.390773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.390784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.393803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.393850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.393861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.397825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.397884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.397896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.400717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.400766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.400777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.404388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.404419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.404430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.407675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.407726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.407737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.410393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.410441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.410453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.413958] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.414006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.414018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.417011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.417061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.417072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.420623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.420675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.420695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.423643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.423693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.423704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.426790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.426840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.426851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.429862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.429909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.429921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.433084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.433135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.433147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:19.408 [2024-07-21 16:34:37.436676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.436726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.436746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.439898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.439949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.439960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.442941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.442992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.443004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.446501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.446553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.446572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.450195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.450244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.450274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.453592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.453641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.453652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.457399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.457451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.457463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.460721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.460772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.460784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.464346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.464396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.464407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.467582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.467632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.467643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.471331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.471382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.471393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.474160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.474208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.474219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.477750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.477800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.477811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.481842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.481894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.408 [2024-07-21 16:34:37.481905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.408 [2024-07-21 16:34:37.484664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.408 [2024-07-21 16:34:37.484713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.484725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.488106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.488157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.488169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.490902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.490952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.490964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.494349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.494397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.494408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.497629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.497677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.497689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.501097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.501148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.501160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.503942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.503991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.504002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.507392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.507442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.507453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.510554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.510605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.510616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.514184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.514233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.514244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.517467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.517515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.517535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.520921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.520971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.520983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.523607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.523657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.523668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.527176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.527227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 
[2024-07-21 16:34:37.527239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.531155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.531206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.531218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.535085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.535137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.535148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.537857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.537904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.537915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.541248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.541308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.541320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.545054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.545104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.545115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.548548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.548598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.548610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.551592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.551643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.551654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.554755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.554805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.554817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.558096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.558144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.558156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.560937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.560985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.560996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.564673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.564723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.564734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.567723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.567773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.567784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.571349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.571399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.571410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.573890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.573937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.573949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.576866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.576917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.576936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.580315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.580365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.580385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.584051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.584102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.584113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.587269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.587329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.587350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.591204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.591255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.591286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.593951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.594000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.594019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.597706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.597756] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.597775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.601031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.601081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.409 [2024-07-21 16:34:37.601094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.409 [2024-07-21 16:34:37.604148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.409 [2024-07-21 16:34:37.604199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.410 [2024-07-21 16:34:37.604218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.410 [2024-07-21 16:34:37.607746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.410 [2024-07-21 16:34:37.607798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.410 [2024-07-21 16:34:37.607818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.410 [2024-07-21 16:34:37.611059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.410 [2024-07-21 16:34:37.611109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.410 [2024-07-21 16:34:37.611128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.613874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.613922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.613941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.617663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.617717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.617729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.620486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 
00:19:19.670 [2024-07-21 16:34:37.620536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.620547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.624219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.624288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.624301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.627617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.627667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.627679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.630896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.630945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.630956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.634316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.634372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.634383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.637360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.637407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.637419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.640461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.640511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.640523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.644051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.644102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.644113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.647239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.647304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.647315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.650836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.650885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.650905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.653890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.653937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.653948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.656838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.656887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.670 [2024-07-21 16:34:37.656900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.670 [2024-07-21 16:34:37.660404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.670 [2024-07-21 16:34:37.660453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.660472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.663675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.663726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.663745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.666931] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.666981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.666992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.670201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.670249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.670279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.674318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.674382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.674394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.677169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.677216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.677226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.680958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.681009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.681020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.684856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.684906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.684918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.687582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.687634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.687646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:19:19.671 [2024-07-21 16:34:37.691328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.691394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.691406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.695536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.695586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.695599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.699361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.699411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.699422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.701787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.701834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.701845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.705700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.705750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.705762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.708745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.708796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.708807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.711449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.711498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.711510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.715391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.715442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.715453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.719271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.719318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.719330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.722190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.722237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.722248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.725800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.725849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.725860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.729865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.729916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.729928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.733999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.734050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.734061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.737045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.737095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.737115] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.740541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.740591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.740603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.744541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.744591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.744603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.747371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.747420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.747432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.750803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.750853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.750865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.753801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.753847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.753859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.756766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.756817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.671 [2024-07-21 16:34:37.756828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.671 [2024-07-21 16:34:37.760467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.671 [2024-07-21 16:34:37.760516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.760528] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.763905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.763954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.763966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.766626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.766661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.766673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.769944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.769992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.770003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.773152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.773200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.773211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.776238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.776299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.776311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.779655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.779717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.779729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.783551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.783586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:19.672 [2024-07-21 16:34:37.783597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.786104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.786151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.786163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.789569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.789618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.789630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.793699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.793748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.793760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.797976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.798025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.798037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.801038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.801087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.801098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.804457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.804506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.804517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.808240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.808299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.808311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.812247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.812306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.812319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.816560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.816609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.816621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.819296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.819334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.819347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.822751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.822800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.822811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.826464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.826498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.826510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.829158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.829204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.829215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.832201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.832251] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.832272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.835465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.835514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.835526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.839144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.839193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.839205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.842368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.842401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.842413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.845843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.845890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.845901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.848854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.848901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.848913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.852780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.852829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.852840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.855732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.855781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.855793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.672 [2024-07-21 16:34:37.859248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.672 [2024-07-21 16:34:37.859308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.672 [2024-07-21 16:34:37.859320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.673 [2024-07-21 16:34:37.862793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.673 [2024-07-21 16:34:37.862841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.673 [2024-07-21 16:34:37.862853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.673 [2024-07-21 16:34:37.866115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.673 [2024-07-21 16:34:37.866163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.673 [2024-07-21 16:34:37.866175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.673 [2024-07-21 16:34:37.869175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.673 [2024-07-21 16:34:37.869224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.673 [2024-07-21 16:34:37.869235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.673 [2024-07-21 16:34:37.872765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.673 [2024-07-21 16:34:37.872814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.673 [2024-07-21 16:34:37.872826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.673 [2024-07-21 16:34:37.875989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.673 [2024-07-21 16:34:37.876037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.673 [2024-07-21 16:34:37.876048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.879416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 
00:19:19.933 [2024-07-21 16:34:37.879451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.933 [2024-07-21 16:34:37.879462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.883066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.933 [2024-07-21 16:34:37.883114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.933 [2024-07-21 16:34:37.883125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.885780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.933 [2024-07-21 16:34:37.885827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.933 [2024-07-21 16:34:37.885839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.889371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.933 [2024-07-21 16:34:37.889420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.933 [2024-07-21 16:34:37.889431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.893594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.933 [2024-07-21 16:34:37.893642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.933 [2024-07-21 16:34:37.893654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.897416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.933 [2024-07-21 16:34:37.897480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.933 [2024-07-21 16:34:37.897492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.900188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.933 [2024-07-21 16:34:37.900236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.933 [2024-07-21 16:34:37.900248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.903841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1ee9010) 00:19:19.933 [2024-07-21 16:34:37.903889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.933 [2024-07-21 16:34:37.903901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.907816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.933 [2024-07-21 16:34:37.907864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.933 [2024-07-21 16:34:37.907875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.933 [2024-07-21 16:34:37.910878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.933 [2024-07-21 16:34:37.910926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.910937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.914518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.914557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.914569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.918140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.918188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.918199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.920812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.920860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.920872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.924247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.924304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.924316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.928043] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.928092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.928104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.931133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.931182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.931193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.934923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.934973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.934985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.938597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.938633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.938644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.941526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.941559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.941570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.944341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.944390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.944410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.947582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.947632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.947643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:19:19.934 [2024-07-21 16:34:37.950972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.951021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.951032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.954179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.954232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.954243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.957792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.957841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.957853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.961062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.961111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.961123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.964135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.964184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.964195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.967399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.967448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.967460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.970557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.970592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.970604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.974143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.974191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.974203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.977738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.977786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.977797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.980819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.980867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.980878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.983708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.983756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.983768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.986677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.986726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.986738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.989604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.989652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.989663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.993279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.993336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.993348] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:37.996852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:37.996901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:37.996912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:38.000251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:38.000310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:38.000322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:38.004111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:38.004160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.934 [2024-07-21 16:34:38.004172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.934 [2024-07-21 16:34:38.006913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.934 [2024-07-21 16:34:38.006960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.006971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.011469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.011518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.011530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.015345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.015393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.015404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.017874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.017921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.017932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.021752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.021800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.021811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.024727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.024775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.024787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.028297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.028346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.028357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.031210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.031275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.031288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.035169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.035218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.035230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.038287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.038350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.038363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.042019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.042067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:19.935 [2024-07-21 16:34:38.042079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.045961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.046011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.046023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.048780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.048825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.048837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.052698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.052747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.052759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.056306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.056354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.056366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.059585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.059635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.059646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.062845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.062893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.062904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.065841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.065889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.065900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.069514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.069565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.069576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.072939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.072988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.073008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.076566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.076615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.076627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.080529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.080579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.080591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.084053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.084100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.084111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.087081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.087130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.087142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.090745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.090795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.090806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.094315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.094372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.094384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.097064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.097113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.097124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.100942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.100990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.101001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.105188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.105237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.105248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.935 [2024-07-21 16:34:38.108045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.935 [2024-07-21 16:34:38.108093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.935 [2024-07-21 16:34:38.108105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.936 [2024-07-21 16:34:38.111597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.936 [2024-07-21 16:34:38.111645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.936 [2024-07-21 16:34:38.111657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.936 [2024-07-21 16:34:38.114984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.936 
[2024-07-21 16:34:38.115033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.936 [2024-07-21 16:34:38.115045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.936 [2024-07-21 16:34:38.118399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.936 [2024-07-21 16:34:38.118433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.936 [2024-07-21 16:34:38.118445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.936 [2024-07-21 16:34:38.121291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.936 [2024-07-21 16:34:38.121321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.936 [2024-07-21 16:34:38.121332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.936 [2024-07-21 16:34:38.125636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.936 [2024-07-21 16:34:38.125699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.936 [2024-07-21 16:34:38.125711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.936 [2024-07-21 16:34:38.129440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.936 [2024-07-21 16:34:38.129474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.936 [2024-07-21 16:34:38.129485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.936 [2024-07-21 16:34:38.132105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.936 [2024-07-21 16:34:38.132154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.936 [2024-07-21 16:34:38.132165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.936 [2024-07-21 16:34:38.135781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:19.936 [2024-07-21 16:34:38.135830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.936 [2024-07-21 16:34:38.135841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.936 [2024-07-21 16:34:38.138927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ee9010) 00:19:19.936 [2024-07-21 16:34:38.138977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.936 [2024-07-21 16:34:38.138988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.196 [2024-07-21 16:34:38.141784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.196 [2024-07-21 16:34:38.141831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.196 [2024-07-21 16:34:38.141842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.196 [2024-07-21 16:34:38.145437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.196 [2024-07-21 16:34:38.145487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.196 [2024-07-21 16:34:38.145499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.196 [2024-07-21 16:34:38.148822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.196 [2024-07-21 16:34:38.148870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.196 [2024-07-21 16:34:38.148882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.196 [2024-07-21 16:34:38.152100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.196 [2024-07-21 16:34:38.152149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.196 [2024-07-21 16:34:38.152161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.196 [2024-07-21 16:34:38.155531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.196 [2024-07-21 16:34:38.155565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.196 [2024-07-21 16:34:38.155576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.196 [2024-07-21 16:34:38.158880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.196 [2024-07-21 16:34:38.158929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.196 [2024-07-21 16:34:38.158940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.196 [2024-07-21 16:34:38.161946] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010)
00:19:20.196 [2024-07-21 16:34:38.161993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:20.196 [2024-07-21 16:34:38.162005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done data digest error on tqpair=(0x1ee9010), the nvme_qpair.c READ command print, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining READ commands on qid:1 with varying cid and lba values, timestamps 16:34:38.165 through 16:34:38.623 ...]
00:19:20.461 [2024-07-21 16:34:38.627556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010)
00:19:20.461 [2024-07-21 16:34:38.627591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:20.461 [2024-07-21 16:34:38.627602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:20.461 [2024-07-21 16:34:38.631258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.631317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.631329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.461 [2024-07-21 16:34:38.633852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.633900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.633911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.461 [2024-07-21 16:34:38.637171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.637221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.637241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.461 [2024-07-21 16:34:38.641357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.641405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.641417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.461 [2024-07-21 16:34:38.645115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.645165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.645176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.461 [2024-07-21 16:34:38.647629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.647662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.647673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.461 [2024-07-21 16:34:38.651793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.651843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.651854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.461 [2024-07-21 16:34:38.654636] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.654667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.654678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.461 [2024-07-21 16:34:38.657835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.657865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.657875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.461 [2024-07-21 16:34:38.662086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.461 [2024-07-21 16:34:38.662118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.461 [2024-07-21 16:34:38.662130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.665767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.665797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.665809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.669130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.669188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.669199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.672483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.672517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.672529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.676102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.676146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.676157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:19:20.721 [2024-07-21 16:34:38.679956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.680005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.680016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.682570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.682604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.682616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.686236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.686294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.686306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.689875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.689924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.689935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.693438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.693495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.693506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.696433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.721 [2024-07-21 16:34:38.696467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.721 [2024-07-21 16:34:38.696487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.721 [2024-07-21 16:34:38.699982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.700031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.700043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.703777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.703826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.703837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.707181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.707230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.707241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.710213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.710273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.710286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.713688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.713736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.713748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.716702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.716751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.716762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.719879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.719928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.719939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.723182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.723230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.723242] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.726837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.726886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.726897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.730559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.730594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.730605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.733555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.733602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.733614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.737085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.737134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.737145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.741233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.741289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.741303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.744903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.744952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.744963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.748034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.748082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.748094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.752152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.752201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.752213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.756370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.756419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.756430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.760187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.760236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.760247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.762887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.762936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.762948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.766434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.766480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.766492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.770139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.770188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.770200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.773084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.773131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.722 [2024-07-21 16:34:38.773143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.776826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.776874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.776887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.780340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.780389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.780400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.783821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.783869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.783880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.787230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.787286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.787299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.791036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.791085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.791096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.794405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.794440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.794452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.798021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.722 [2024-07-21 16:34:38.798069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.722 [2024-07-21 16:34:38.798080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.722 [2024-07-21 16:34:38.801256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.801325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.801337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.804600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.804635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.804646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.808341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.808371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.808382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.811460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.811509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.811520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.815376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.815425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.815437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.819026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.819075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.819087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.822622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.822664] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.822677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.825830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.825878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.825889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.828667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.828716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.828727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.832270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.832333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.832345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.836612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.836646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.836661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.840578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.840612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.840629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.843739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.843790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.843802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.847612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.847680] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.847700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.851191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.851240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.851252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.854689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.854738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.854749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.858210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.858258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.858297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.862089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.862137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.862148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.864958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.865007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.865018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.868838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.868888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.868899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.872009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.872059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.872070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.875556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.875589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.875600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.878994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.879042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.879054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.882571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.882605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.882616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.885758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.885806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.885817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.888775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.888823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.888834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.892387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.892421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.892433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.895088] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.895136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.895148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.898812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.898861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.723 [2024-07-21 16:34:38.898872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.723 [2024-07-21 16:34:38.902216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.723 [2024-07-21 16:34:38.902273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.724 [2024-07-21 16:34:38.902286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.724 [2024-07-21 16:34:38.905557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.724 [2024-07-21 16:34:38.905591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.724 [2024-07-21 16:34:38.905602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.724 [2024-07-21 16:34:38.909149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.724 [2024-07-21 16:34:38.909217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.724 [2024-07-21 16:34:38.909231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.724 [2024-07-21 16:34:38.912726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.724 [2024-07-21 16:34:38.912775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.724 [2024-07-21 16:34:38.912787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.724 [2024-07-21 16:34:38.915692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.724 [2024-07-21 16:34:38.915741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.724 [2024-07-21 16:34:38.915753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:20.724 [2024-07-21 16:34:38.919116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.724 [2024-07-21 16:34:38.919164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.724 [2024-07-21 16:34:38.919176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.724 [2024-07-21 16:34:38.922736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.724 [2024-07-21 16:34:38.922787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.724 [2024-07-21 16:34:38.922798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.724 [2024-07-21 16:34:38.926235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.724 [2024-07-21 16:34:38.926293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.724 [2024-07-21 16:34:38.926306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.983 [2024-07-21 16:34:38.929943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.929992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.930004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.933696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.933745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.933757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.936797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.936845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.936857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.939883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.939933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.939944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.943323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.943370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.943382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.947149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.947200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.947212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.950175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.950224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.950235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.953839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.953888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.953900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.957589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.957624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.957635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.960937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.960986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.960996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.964311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.964360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.964371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.967722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.967772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.967783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.971094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.971142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.971154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.974844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.974894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.974905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.977391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.977440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.977451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.981127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.981177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.981189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.984393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.984443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.984454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.987866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.987915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:20.984 [2024-07-21 16:34:38.987926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.991056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.991104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.991124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.994003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.994036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.994055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:38.997025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:38.997058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:38.997070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:39.000211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:39.000245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:39.000256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:39.004322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:39.004354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:39.004365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:39.006927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:39.006959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:39.006970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:39.010520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:39.010554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7648 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:39.010566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:39.014102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:39.014135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:39.014147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.984 [2024-07-21 16:34:39.017358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.984 [2024-07-21 16:34:39.017390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.984 [2024-07-21 16:34:39.017401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.020800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.020832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.020844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.024358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.024391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.024403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.027487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.027520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.027533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.030610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.030645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.030673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.034033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.034064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.034075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.037914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.037946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.037958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.041150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.041183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.041201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.044708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.044740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.044752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.047788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.047822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.047833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.051057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.051090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.051101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.054731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.054765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.054777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.058191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.058223] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.058234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.061820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.061851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.061879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.064752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.064786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.064798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.068839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.068873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.068886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.073394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.073427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.073439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.077522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.077557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.077570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.080309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.080365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.080377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.084232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.084290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.084303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.087989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.088022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.088034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.091872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.091906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.091918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.095001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.095034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.095046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.099412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.099446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.099458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.103573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.103607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.103619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.106542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.106576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.106589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.109883] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.109917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.109929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.113924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.114123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.114220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.118300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.118340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.118369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.985 [2024-07-21 16:34:39.121060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.985 [2024-07-21 16:34:39.121093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.985 [2024-07-21 16:34:39.121105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.124573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.124606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.124617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.128051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.128085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.128097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.131238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.131295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.131307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:19:20.986 [2024-07-21 16:34:39.134788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.134821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.134832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.138706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.138739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.138751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.141477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.141510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.141530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.145041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.145075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.145086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.149126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.149158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.149170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.152159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.152191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.152202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.156030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.156063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.156075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.158797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.158830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.158841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.161986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.162017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.162028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.165523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.165557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.165569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.169123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.169163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.169174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.171896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.171930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.171941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.175893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.176095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.176201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.180637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.180689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.180701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.183638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.183671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.183683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:20.986 [2024-07-21 16:34:39.186601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:20.986 [2024-07-21 16:34:39.186635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:20.986 [2024-07-21 16:34:39.186647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:21.245 [2024-07-21 16:34:39.189658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:21.245 [2024-07-21 16:34:39.189699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.245 [2024-07-21 16:34:39.189710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:21.245 [2024-07-21 16:34:39.193058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:21.245 [2024-07-21 16:34:39.193092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.245 [2024-07-21 16:34:39.193104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.245 [2024-07-21 16:34:39.196254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ee9010) 00:19:21.245 [2024-07-21 16:34:39.196313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.245 [2024-07-21 16:34:39.196325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.245 00:19:21.245 Latency(us) 00:19:21.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.245 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:21.245 nvme0n1 : 2.00 9065.52 1133.19 0.00 0.00 1761.39 614.40 4796.04 00:19:21.245 =================================================================================================================== 00:19:21.245 Total : 9065.52 1133.19 0.00 0.00 1761.39 614.40 4796.04 00:19:21.245 0 00:19:21.245 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:21.245 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:21.245 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:19:21.245 | .driver_specific 00:19:21.245 | .nvme_error 00:19:21.245 | .status_code 00:19:21.245 | .command_transient_transport_error' 00:19:21.245 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 585 > 0 )) 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93819 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93819 ']' 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93819 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93819 00:19:21.504 killing process with pid 93819 00:19:21.504 Received shutdown signal, test time was about 2.000000 seconds 00:19:21.504 00:19:21.504 Latency(us) 00:19:21.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.504 =================================================================================================================== 00:19:21.504 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93819' 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93819 00:19:21.504 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93819 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93905 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93905 /var/tmp/bperf.sock 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93905 ']' 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:21.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.773 16:34:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:21.773 [2024-07-21 16:34:39.803578] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:19:21.773 [2024-07-21 16:34:39.803849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93905 ] 00:19:21.773 [2024-07-21 16:34:39.939139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.046 [2024-07-21 16:34:40.041167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.612 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.612 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:22.612 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:22.612 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:22.871 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:22.871 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.871 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:22.871 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.871 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:22.871 16:34:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:23.129 nvme0n1 00:19:23.129 16:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:23.129 16:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.129 16:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:23.129 16:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.129 16:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:23.129 16:34:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:23.388 Running I/O for 2 seconds... 
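The xtrace above is the entire setup for the randwrite error-injection pass, so a condensed standalone sketch may help when reading the WRITE completions that follow. It is only a sketch assembled from the commands visible in this trace: it reuses the same repo path, RPC socket, target address, and NQN as this run, backgrounds bdevperf with a short sleep in place of the harness's waitforlisten polling, and sends the accel_error_inject_error calls to the default RPC socket, which is an assumption about where the harness's bare rpc_cmd points.

  SOCK=/var/tmp/bperf.sock
  SPDK=/home/vagrant/spdk_repo/spdk

  # Start bdevperf against the RPC socket: core mask 0x2, 4096-byte random writes, QD 128, 2 s run.
  # -z defers the workload until perform_tests is issued, matching the trace above.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  sleep 1   # stand-in for the harness's waitforlisten loop (assumption: 1 s is enough here)

  # Keep per-controller NVMe error statistics and retry I/O indefinitely in the bdev layer.
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previous injection, attach the NVMe-oF TCP controller with data digest (--ddgst),
  # then inject crc32c corruption in the accel layer (arguments copied verbatim from the trace;
  # the bare rpc.py calls target the default RPC socket, see the note above).
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the workload, then read back how many completions ended in
  # COMMAND TRANSIENT TRANSPORT ERROR, using the same jq filter the harness traces.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In the trace, the analogous readback after the randread pass returned 585, which is exactly what the (( 585 > 0 )) assertion in host/digest.sh@71 checks before the first bdevperf instance is killed.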
00:19:23.388 [2024-07-21 16:34:41.430394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ee5c8 00:19:23.388 [2024-07-21 16:34:41.431163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.388 [2024-07-21 16:34:41.431223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:23.388 [2024-07-21 16:34:41.439102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fac10 00:19:23.388 [2024-07-21 16:34:41.439719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.388 [2024-07-21 16:34:41.439749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:23.388 [2024-07-21 16:34:41.450373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fef90 00:19:23.388 [2024-07-21 16:34:41.451459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.388 [2024-07-21 16:34:41.451489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.459236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f31b8 00:19:23.389 [2024-07-21 16:34:41.460784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.460812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.467569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f5be8 00:19:23.389 [2024-07-21 16:34:41.468292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.468320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.477092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e9e10 00:19:23.389 [2024-07-21 16:34:41.477811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.477841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.486532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fef90 00:19:23.389 [2024-07-21 16:34:41.487179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.487209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.497897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e2c28 00:19:23.389 [2024-07-21 16:34:41.498973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.499002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.507002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190edd58 00:19:23.389 [2024-07-21 16:34:41.507899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.507927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.516448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ee5c8 00:19:23.389 [2024-07-21 16:34:41.517455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.517482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.526103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc128 00:19:23.389 [2024-07-21 16:34:41.526741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.526773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.536085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f6cc8 00:19:23.389 [2024-07-21 16:34:41.536840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.536886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.545247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e01f8 00:19:23.389 [2024-07-21 16:34:41.546248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.546286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.554484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ee5c8 00:19:23.389 [2024-07-21 16:34:41.555513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.555540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.565752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ea680 00:19:23.389 [2024-07-21 16:34:41.567378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.567406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.572559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f6458 00:19:23.389 [2024-07-21 16:34:41.573192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.573220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.583392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f4b08 00:19:23.389 [2024-07-21 16:34:41.584224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.584254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:23.389 [2024-07-21 16:34:41.592351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e12d8 00:19:23.389 [2024-07-21 16:34:41.592976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.389 [2024-07-21 16:34:41.593005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:23.648 [2024-07-21 16:34:41.602199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fb480 00:19:23.648 [2024-07-21 16:34:41.602979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.648 [2024-07-21 16:34:41.603010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:23.648 [2024-07-21 16:34:41.611460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f5378 00:19:23.648 [2024-07-21 16:34:41.612507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.612535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.620601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e1b48 00:19:23.649 [2024-07-21 16:34:41.621502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.621529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.629458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e0630 00:19:23.649 [2024-07-21 16:34:41.630192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.630219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.638705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190eea00 00:19:23.649 [2024-07-21 16:34:41.639589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.639616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.648435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e4578 00:19:23.649 [2024-07-21 16:34:41.649456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.649483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.657863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e8088 00:19:23.649 [2024-07-21 16:34:41.658882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.658909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.666755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e3d08 00:19:23.649 [2024-07-21 16:34:41.667655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.667682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.675842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ecc78 00:19:23.649 [2024-07-21 16:34:41.676739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.676765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.685516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ef6a8 00:19:23.649 [2024-07-21 16:34:41.686572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.686602] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.694875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e38d0 00:19:23.649 [2024-07-21 16:34:41.695503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.695532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.703884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e6738 00:19:23.649 [2024-07-21 16:34:41.704423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.704452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.714838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190edd58 00:19:23.649 [2024-07-21 16:34:41.716222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.716248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.721456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fac10 00:19:23.649 [2024-07-21 16:34:41.722067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.722094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.732603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc998 00:19:23.649 [2024-07-21 16:34:41.733762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.733790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.741247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f1ca0 00:19:23.649 [2024-07-21 16:34:41.742203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.742231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.750347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e9e10 00:19:23.649 [2024-07-21 16:34:41.751272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.751304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.759623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e3498 00:19:23.649 [2024-07-21 16:34:41.760160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.760190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.769318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e0a68 00:19:23.649 [2024-07-21 16:34:41.769982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.770011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.778416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ea680 00:19:23.649 [2024-07-21 16:34:41.779358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.779386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.787565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e01f8 00:19:23.649 [2024-07-21 16:34:41.788392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.788434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.796151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f96f8 00:19:23.649 [2024-07-21 16:34:41.796835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.796863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.807151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e7818 00:19:23.649 [2024-07-21 16:34:41.807970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.807999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.816480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190dece0 00:19:23.649 [2024-07-21 16:34:41.817568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 
16:34:41.817594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.825344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fb8b8 00:19:23.649 [2024-07-21 16:34:41.826303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.826329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.836441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190de8a8 00:19:23.649 [2024-07-21 16:34:41.838025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.838051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.843061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ea248 00:19:23.649 [2024-07-21 16:34:41.843902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.649 [2024-07-21 16:34:41.843943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:23.649 [2024-07-21 16:34:41.854182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fa7d8 00:19:23.908 [2024-07-21 16:34:41.855469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.908 [2024-07-21 16:34:41.855508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.908 [2024-07-21 16:34:41.861652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190efae0 00:19:23.908 [2024-07-21 16:34:41.862391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.862431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.872767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ea248 00:19:23.909 [2024-07-21 16:34:41.874130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.874156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.879410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e0ea0 00:19:23.909 [2024-07-21 16:34:41.880008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:23.909 [2024-07-21 16:34:41.880037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.888826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f4298 00:19:23.909 [2024-07-21 16:34:41.889431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.889460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.901005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fa7d8 00:19:23.909 [2024-07-21 16:34:41.902513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.902540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.907658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fa3a0 00:19:23.909 [2024-07-21 16:34:41.908400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.908428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.918790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e7c50 00:19:23.909 [2024-07-21 16:34:41.919930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.919957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.926222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e3060 00:19:23.909 [2024-07-21 16:34:41.926924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.926954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.935151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190efae0 00:19:23.909 [2024-07-21 16:34:41.935756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.935783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.946226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f31b8 00:19:23.909 [2024-07-21 16:34:41.947266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3476 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.947302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.954834] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190eee38 00:19:23.909 [2024-07-21 16:34:41.955723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.955749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.963913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190df550 00:19:23.909 [2024-07-21 16:34:41.964807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.964834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.973359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190eaef0 00:19:23.909 [2024-07-21 16:34:41.974227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.974254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.982905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fb480 00:19:23.909 [2024-07-21 16:34:41.983537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.983565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:41.993901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f0350 00:19:23.909 [2024-07-21 16:34:41.995423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:41.995462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.000567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fe2e8 00:19:23.909 [2024-07-21 16:34:42.001161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.001189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.009386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e0ea0 00:19:23.909 [2024-07-21 16:34:42.009959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:22940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.009987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.019106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fa3a0 00:19:23.909 [2024-07-21 16:34:42.019839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.019867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.028889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ed0b0 00:19:23.909 [2024-07-21 16:34:42.029777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.029805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.038331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fd208 00:19:23.909 [2024-07-21 16:34:42.039180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.039223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.047470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f6458 00:19:23.909 [2024-07-21 16:34:42.048308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.048351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.056915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fb480 00:19:23.909 [2024-07-21 16:34:42.057762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.057803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.065811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fd208 00:19:23.909 [2024-07-21 16:34:42.066552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.066582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.076719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f0ff8 00:19:23.909 [2024-07-21 16:34:42.077955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:86 nsid:1 lba:23602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.077982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.085381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fb8b8 00:19:23.909 [2024-07-21 16:34:42.086384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.086423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.094511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc998 00:19:23.909 [2024-07-21 16:34:42.095415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.095442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.103377] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f0788 00:19:23.909 [2024-07-21 16:34:42.104104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.104132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:23.909 [2024-07-21 16:34:42.112241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fac10 00:19:23.909 [2024-07-21 16:34:42.112863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.909 [2024-07-21 16:34:42.112892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.121476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190df118 00:19:24.169 [2024-07-21 16:34:42.122105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.122133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.132409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ebfd0 00:19:24.169 [2024-07-21 16:34:42.133190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.133242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.143418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc560 00:19:24.169 [2024-07-21 16:34:42.144361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.144391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.155795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190eff18 00:19:24.169 [2024-07-21 16:34:42.157255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.157306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.166255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e4578 00:19:24.169 [2024-07-21 16:34:42.167949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.167977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.173379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc128 00:19:24.169 [2024-07-21 16:34:42.174010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.174038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.184773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190df550 00:19:24.169 [2024-07-21 16:34:42.185928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.185955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.193116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e1710 00:19:24.169 [2024-07-21 16:34:42.194607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.194637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.201454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ebfd0 00:19:24.169 [2024-07-21 16:34:42.202050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.202080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.212566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f7538 00:19:24.169 [2024-07-21 
16:34:42.213596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.213623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.221150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e8088 00:19:24.169 [2024-07-21 16:34:42.222064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.222091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.231248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f7100 00:19:24.169 [2024-07-21 16:34:42.231947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.231976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.242548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f20d8 00:19:24.169 [2024-07-21 16:34:42.244052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.244079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.249175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e2c28 00:19:24.169 [2024-07-21 16:34:42.249936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.249964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.258929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f7970 00:19:24.169 [2024-07-21 16:34:42.259827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.259855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.268404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f9f68 00:19:24.169 [2024-07-21 16:34:42.269271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.269319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.277504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fe2e8 
00:19:24.169 [2024-07-21 16:34:42.278409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.278448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.287226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e4140 00:19:24.169 [2024-07-21 16:34:42.288235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.288269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.296552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f6020 00:19:24.169 [2024-07-21 16:34:42.297160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.297198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.305527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f46d0 00:19:24.169 [2024-07-21 16:34:42.306040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.306070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.317008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ebfd0 00:19:24.169 [2024-07-21 16:34:42.318550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.318579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.323672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190de8a8 00:19:24.169 [2024-07-21 16:34:42.324322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.324350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.332526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ff3c8 00:19:24.169 [2024-07-21 16:34:42.333137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.333165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:24.169 [2024-07-21 16:34:42.342190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with 
pdu=0x2000190eff18 00:19:24.169 [2024-07-21 16:34:42.342954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.169 [2024-07-21 16:34:42.342982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:24.170 [2024-07-21 16:34:42.353315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e9168 00:19:24.170 [2024-07-21 16:34:42.354610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.170 [2024-07-21 16:34:42.354639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:24.170 [2024-07-21 16:34:42.362848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ec408 00:19:24.170 [2024-07-21 16:34:42.364125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.170 [2024-07-21 16:34:42.364152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:24.170 [2024-07-21 16:34:42.371733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e3498 00:19:24.170 [2024-07-21 16:34:42.372920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.170 [2024-07-21 16:34:42.372946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.380801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f2510 00:19:24.429 [2024-07-21 16:34:42.381558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.381587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.389484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f1430 00:19:24.429 [2024-07-21 16:34:42.390946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.390974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.399821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190eaab8 00:19:24.429 [2024-07-21 16:34:42.400977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.401003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.408736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c724f0) with pdu=0x2000190f4f40 00:19:24.429 [2024-07-21 16:34:42.409758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.409786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.417585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e5658 00:19:24.429 [2024-07-21 16:34:42.418503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.418531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.426475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190eb328 00:19:24.429 [2024-07-21 16:34:42.427252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.427292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.435566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e5ec8 00:19:24.429 [2024-07-21 16:34:42.436196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.436224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.447466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e6738 00:19:24.429 [2024-07-21 16:34:42.448980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.449007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.454108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f4298 00:19:24.429 [2024-07-21 16:34:42.454774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.454810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.466092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e5220 00:19:24.429 [2024-07-21 16:34:42.467661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.467694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.472768] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e12d8 00:19:24.429 [2024-07-21 16:34:42.473535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.473562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.483910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fa7d8 00:19:24.429 [2024-07-21 16:34:42.485144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.429 [2024-07-21 16:34:42.485181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:24.429 [2024-07-21 16:34:42.493348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e1b48 00:19:24.429 [2024-07-21 16:34:42.494435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.494463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.503136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190dfdc0 00:19:24.430 [2024-07-21 16:34:42.504233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.504296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.513468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ec840 00:19:24.430 [2024-07-21 16:34:42.514611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.514657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.523420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f0350 00:19:24.430 [2024-07-21 16:34:42.524192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.524221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.532174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ecc78 00:19:24.430 [2024-07-21 16:34:42.533163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.533192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.542564] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f8e88 00:19:24.430 [2024-07-21 16:34:42.543704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.543732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.552541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190df988 00:19:24.430 [2024-07-21 16:34:42.553625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.553657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.560442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e0a68 00:19:24.430 [2024-07-21 16:34:42.561085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.561112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.572073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f6458 00:19:24.430 [2024-07-21 16:34:42.573303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.573340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.582057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f3e60 00:19:24.430 [2024-07-21 16:34:42.583386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.583412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.590344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190de038 00:19:24.430 [2024-07-21 16:34:42.591844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.591871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.598538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e5a90 00:19:24.430 [2024-07-21 16:34:42.599178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.599205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:24.430 
[2024-07-21 16:34:42.609826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e4140 00:19:24.430 [2024-07-21 16:34:42.611015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.611042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.620316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190efae0 00:19:24.430 [2024-07-21 16:34:42.621736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.621763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:24.430 [2024-07-21 16:34:42.629491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f2d80 00:19:24.430 [2024-07-21 16:34:42.630900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.430 [2024-07-21 16:34:42.630928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:24.689 [2024-07-21 16:34:42.638781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f4b08 00:19:24.689 [2024-07-21 16:34:42.640072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.689 [2024-07-21 16:34:42.640100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:24.689 [2024-07-21 16:34:42.648719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f5378 00:19:24.689 [2024-07-21 16:34:42.650000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.689 [2024-07-21 16:34:42.650028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.658730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e1f80 00:19:24.690 [2024-07-21 16:34:42.659560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.659600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.667811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f81e0 00:19:24.690 [2024-07-21 16:34:42.669339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.669378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.678574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e38d0 00:19:24.690 [2024-07-21 16:34:42.679825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.679852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.687730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e5ec8 00:19:24.690 [2024-07-21 16:34:42.688926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.688952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.697561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ebfd0 00:19:24.690 [2024-07-21 16:34:42.698909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.698936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.706323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc998 00:19:24.690 [2024-07-21 16:34:42.707419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.707447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.715770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f7da8 00:19:24.690 [2024-07-21 16:34:42.716749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.716776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.724703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e8088 00:19:24.690 [2024-07-21 16:34:42.725691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.725719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.735891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ddc00 00:19:24.690 [2024-07-21 16:34:42.737419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.737445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 
cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.742607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fa7d8 00:19:24.690 [2024-07-21 16:34:42.743326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.743354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.752161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fb8b8 00:19:24.690 [2024-07-21 16:34:42.752876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.752905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.764432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e27f0 00:19:24.690 [2024-07-21 16:34:42.766055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.766082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.771193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fb8b8 00:19:24.690 [2024-07-21 16:34:42.771913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.771941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.780097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ed0b0 00:19:24.690 [2024-07-21 16:34:42.780801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.780829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.791239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc998 00:19:24.690 [2024-07-21 16:34:42.792490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.792517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.801097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190edd58 00:19:24.690 [2024-07-21 16:34:42.802494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.802522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.807818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc998 00:19:24.690 [2024-07-21 16:34:42.808423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.808450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.817407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e95a0 00:19:24.690 [2024-07-21 16:34:42.817996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.818024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.828808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f3e60 00:19:24.690 [2024-07-21 16:34:42.829923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.829951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.838128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fac10 00:19:24.690 [2024-07-21 16:34:42.839380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.839407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.846884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190eff18 00:19:24.690 [2024-07-21 16:34:42.847880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.847908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.856019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e5658 00:19:24.690 [2024-07-21 16:34:42.857040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.857067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.865499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e0a68 00:19:24.690 [2024-07-21 16:34:42.866540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.866568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.876437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fd208 00:19:24.690 [2024-07-21 16:34:42.877951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.877979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.883076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f4b08 00:19:24.690 [2024-07-21 16:34:42.883711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.883737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:24.690 [2024-07-21 16:34:42.894746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc128 00:19:24.690 [2024-07-21 16:34:42.896151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.690 [2024-07-21 16:34:42.896177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.903878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fa3a0 00:19:24.954 [2024-07-21 16:34:42.905287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.905313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.913670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fef90 00:19:24.954 [2024-07-21 16:34:42.915199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.915226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.920456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f46d0 00:19:24.954 [2024-07-21 16:34:42.921100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.921135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.931727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f6cc8 00:19:24.954 [2024-07-21 16:34:42.932871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.932900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.940646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f92c0 00:19:24.954 [2024-07-21 16:34:42.941758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.941785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.949745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f6458 00:19:24.954 [2024-07-21 16:34:42.950675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.950710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.960958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e73e0 00:19:24.954 [2024-07-21 16:34:42.962582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.962609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.967718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f5378 00:19:24.954 [2024-07-21 16:34:42.968500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.968528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.977177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190dfdc0 00:19:24.954 [2024-07-21 16:34:42.977954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.977981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.986076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fac10 00:19:24.954 [2024-07-21 16:34:42.986754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:42.986782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:42.996962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190df988 00:19:24.954 [2024-07-21 16:34:42.998139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 
16:34:42.998167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.006450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f1ca0 00:19:24.954 [2024-07-21 16:34:43.007619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.007646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.013889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f0788 00:19:24.954 [2024-07-21 16:34:43.014578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.014607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.024554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e6fa8 00:19:24.954 [2024-07-21 16:34:43.025635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.025663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.033746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ed920 00:19:24.954 [2024-07-21 16:34:43.034799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.034825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.044936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f1430 00:19:24.954 [2024-07-21 16:34:43.046525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.046552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.051622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f7538 00:19:24.954 [2024-07-21 16:34:43.052442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.052470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.061440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f7da8 00:19:24.954 [2024-07-21 16:34:43.062405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:24.954 [2024-07-21 16:34:43.062450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.070971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f2d80 00:19:24.954 [2024-07-21 16:34:43.071910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.071936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.080054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ea680 00:19:24.954 [2024-07-21 16:34:43.080984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.081011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.089361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fda78 00:19:24.954 [2024-07-21 16:34:43.089890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.089921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.099999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ee5c8 00:19:24.954 [2024-07-21 16:34:43.101184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.101211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.109221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e5658 00:19:24.954 [2024-07-21 16:34:43.110539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.954 [2024-07-21 16:34:43.110567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:24.954 [2024-07-21 16:34:43.117862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f9b30 00:19:24.955 [2024-07-21 16:34:43.118939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.955 [2024-07-21 16:34:43.118967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:24.955 [2024-07-21 16:34:43.126953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e23b8 00:19:24.955 [2024-07-21 16:34:43.128046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21633 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.955 [2024-07-21 16:34:43.128073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:24.955 [2024-07-21 16:34:43.136644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fb480 00:19:24.955 [2024-07-21 16:34:43.137841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.955 [2024-07-21 16:34:43.137869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:24.955 [2024-07-21 16:34:43.145749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e1b48 00:19:24.955 [2024-07-21 16:34:43.146840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.955 [2024-07-21 16:34:43.146870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:24.955 [2024-07-21 16:34:43.155898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e2c28 00:19:24.955 [2024-07-21 16:34:43.156794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.955 [2024-07-21 16:34:43.156840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.165748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ed920 00:19:25.215 [2024-07-21 16:34:43.166544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.166576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.178394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e12d8 00:19:25.215 [2024-07-21 16:34:43.179788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.179814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.188339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190dfdc0 00:19:25.215 [2024-07-21 16:34:43.189584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.189612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.199685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190de038 00:19:25.215 [2024-07-21 16:34:43.201362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:18253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.201389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.206558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190dfdc0 00:19:25.215 [2024-07-21 16:34:43.207485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.207511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.217902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ef270 00:19:25.215 [2024-07-21 16:34:43.219372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.219401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.227577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ee190 00:19:25.215 [2024-07-21 16:34:43.228982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.229009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.235182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f35f0 00:19:25.215 [2024-07-21 16:34:43.236097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.236123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.244604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190dfdc0 00:19:25.215 [2024-07-21 16:34:43.245504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.245531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.254176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f4298 00:19:25.215 [2024-07-21 16:34:43.255071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.255113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.263160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e88f8 00:19:25.215 [2024-07-21 16:34:43.263908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:74 nsid:1 lba:23305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.263936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.272390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e1b48 00:19:25.215 [2024-07-21 16:34:43.273122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.273151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.283648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e8d30 00:19:25.215 [2024-07-21 16:34:43.284942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.284969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.292002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f6890 00:19:25.215 [2024-07-21 16:34:43.293501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.293529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.302080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fc560 00:19:25.215 [2024-07-21 16:34:43.302939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.302985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.310716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190fcdd0 00:19:25.215 [2024-07-21 16:34:43.311743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.311771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.319965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f7970 00:19:25.215 [2024-07-21 16:34:43.320910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.320937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.329590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ebfd0 00:19:25.215 [2024-07-21 16:34:43.330536] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.330564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.338988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e84c0 00:19:25.215 [2024-07-21 16:34:43.340499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.340526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.349513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f2948 00:19:25.215 [2024-07-21 16:34:43.350698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.350725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.357056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e27f0 00:19:25.215 [2024-07-21 16:34:43.357689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.357718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.368489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190ed0b0 00:19:25.215 [2024-07-21 16:34:43.369774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.369801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.378350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f8e88 00:19:25.215 [2024-07-21 16:34:43.379777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.379804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.388179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190dece0 00:19:25.215 [2024-07-21 16:34:43.389733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.215 [2024-07-21 16:34:43.389760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.215 [2024-07-21 16:34:43.394933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f7100 00:19:25.215 [2024-07-21 16:34:43.395575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:25.215 [2024-07-21 16:34:43.395602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:19:25.215 [2024-07-21 16:34:43.406935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190e3d08
00:19:25.215 [2024-07-21 16:34:43.408483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:25.215 [2024-07-21 16:34:43.408509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:19:25.215 [2024-07-21 16:34:43.413729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c724f0) with pdu=0x2000190f4f40
00:19:25.215 [2024-07-21 16:34:43.414515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:25.215 [2024-07-21 16:34:43.414545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:19:25.473
00:19:25.473 Latency(us)
00:19:25.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:25.474 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:25.474 nvme0n1 : 2.00 26928.24 105.19 0.00 0.00 4746.94 1936.29 12988.04
00:19:25.474 ===================================================================================================================
00:19:25.474 Total : 26928.24 105.19 0.00 0.00 4746.94 1936.29 12988.04
00:19:25.474 0
00:19:25.474 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:25.474 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:25.474 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:25.474 | .driver_specific
00:19:25.474 | .nvme_error
00:19:25.474 | .status_code
00:19:25.474 | .command_transient_transport_error'
00:19:25.474 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 211 > 0 ))
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93905
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93905 ']'
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93905
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93905
00:19:25.731 killing process with pid 93905
Received shutdown signal, test time was about 2.000000 seconds
00:19:25.731
00:19:25.731 Latency(us)
00:19:25.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:25.731
===================================================================================================================
00:19:25.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93905'
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93905
00:19:25.731 16:34:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93905
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93995
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93995 /var/tmp/bperf.sock
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93995 ']'
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:25.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:25.989 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:25.989 [2024-07-21 16:34:44.084135] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization...
00:19:25.989 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:25.989 Zero copy mechanism will not be used.
00:19:25.989 [2024-07-21 16:34:44.084293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93995 ]
00:19:26.247 [2024-07-21 16:34:44.221802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:26.247 [2024-07-21 16:34:44.306494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:26.812 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:26.812 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:19:26.812 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:26.812 16:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:27.069 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:27.070 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:27.070 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:27.327 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:27.327 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:27.327 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:27.598 nvme0n1
00:19:27.598 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:19:27.598 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:27.598 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:27.598 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:27.598 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:27.598 16:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:27.598 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:27.598 Zero copy mechanism will not be used.
00:19:27.599 Running I/O for 2 seconds...
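The trace above boils down to a short RPC sequence: enable per-command NVMe error counters on the bdevperf side, attach the subsystem with data digests enabled, arm crc32c corruption on the target so the digest check fails intermittently, drive I/O, and read the transient-error count back from bdev_get_iostat. The lines below restate that flow as a minimal shell sketch, reusing the socket path, target address, and NQN from this run; rpc_cmd stands for the autotest helper that talks to the target-side RPC socket in this run (it is not defined here), so treat this as an illustration of the traced commands rather than a drop-in replacement for host/digest.sh.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # SPDK RPC client used throughout this run
bperf_sock=/var/tmp/bperf.sock                    # bdevperf application socket

# bdevperf side: keep per-command NVMe error statistics and retry transient failures indefinitely
$rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# target side (rpc_cmd helper): start with crc32c error injection disabled
rpc_cmd accel_error_inject_error -o crc32c -t disable

# bdevperf side: attach the TCP subsystem with data digest enabled (--ddgst)
$rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# target side: corrupt every 32nd crc32c result so the data digest check fails intermittently
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# run the timed workload, then read back how many commands ended in
# COMMAND TRANSIENT TRANSPORT ERROR from the bdev's NVMe error counters
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests
$rpc -s $bperf_sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The records that follow are the expected effect of that injection: each corrupted digest surfaces as a data_crc32_calc_done error on the target and a COMMAND TRANSIENT TRANSPORT ERROR completion on the initiator, which the retry policy above absorbs while the counter accumulates.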
00:19:27.599 [2024-07-21 16:34:45.659489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.659777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.659820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.664598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.664866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.664894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.669718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.669988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.670025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.674875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.675146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.675177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.679921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.680190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.680230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.685028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.685321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.685351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.690136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.690483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.690512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.695334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.695605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.695635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.700383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.700664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.700694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.705545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.705834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.705865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.710656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.710938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.710972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.715712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.715979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.716008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.720901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.721161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.721197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.726403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.726703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.726734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.731474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.731743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.731768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.736635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.736896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.736928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.741744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.742012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.742043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.746866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.747134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.747165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.751937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.752205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.752241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.757046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.757359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.757386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.762122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.762429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.762457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.767198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.767493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.767521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.772236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.772516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.772544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.777387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.777665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.777711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.782461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.782751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.782782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.787496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.787764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.787796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.792622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.792889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.792920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.797747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.798012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 
[2024-07-21 16:34:45.798045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.599 [2024-07-21 16:34:45.803002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.599 [2024-07-21 16:34:45.803255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-07-21 16:34:45.803274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.808034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.808311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.808335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.813091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.813387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.813410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.818158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.818465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.818498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.823173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.823453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.823485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.828215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.828495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.828525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.833301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.833579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.833609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.838320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.838637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.838684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.843456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.843723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.843756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.848526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.848784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.848803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.853631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.853921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.853949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.858883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.859144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.859180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.864009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.864288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.864312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.869103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.869410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.869441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.874146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.874435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.874480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.879219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.879504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.879530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.884337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.884606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.884643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.889386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.889663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.889706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.894472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.894794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.894824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.899529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.899780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.899814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.904526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.904792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.904824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.858 [2024-07-21 16:34:45.909512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.858 [2024-07-21 16:34:45.909762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.858 [2024-07-21 16:34:45.909796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.914527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.914816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.914847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.919504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.919755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.919775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.924485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.924736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.924756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.929533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.929771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.929795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.934503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.934791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.934835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.939682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 
[2024-07-21 16:34:45.939943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.939987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.944937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.945198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.945232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.950076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.950375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.950395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.955229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.955494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.955536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.960213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.960479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.960527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.965235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.965501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.965543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.970160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.970453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.970496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.975168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.975429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.975471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.980160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.980424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.980473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.985196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.985458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.985498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.990189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.990483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.990526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:45.995221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:45.995484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:45.995526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.000323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.000575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.000610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.005346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.005596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.005632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.010350] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.010628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.010668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.015379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.015629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.015648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.020361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.020617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.020635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.025345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.025595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.025614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.030347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.030618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.030656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.035405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.035653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.035671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.040421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.040673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.040720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
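
Editor's note on the notices above: the repeated data_crc32_calc_done errors are the expected output of this test pass. The NVMe/TCP data digest (DDGST) is a CRC-32C computed over the DATA field of a PDU; when the recomputed value does not match the digest carried in the PDU, the host logs a data digest error and the command completes with a transient transport error. The standalone C sketch below illustrates that check conceptually; it is not SPDK code, and every name in it is local to the example.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Software CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
     * Well-known check value: crc32c("123456789") == 0xE3069283. */
    static uint32_t crc32c(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical receive-side check: recompute the digest over the PDU
     * payload and compare it with the DDGST value received with the PDU.
     * A mismatch is what the log reports as a "Data digest error". */
    static int data_digest_ok(const void *payload, size_t len, uint32_t received_ddgst)
    {
        return crc32c(payload, len) == received_ddgst;
    }

    int main(void)
    {
        const char buf[] = "123456789";
        uint32_t good = crc32c(buf, strlen(buf));

        printf("digest 0x%08x, intact: %d, corrupted: %d\n",
               good,
               data_digest_ok(buf, strlen(buf), good),
               data_digest_ok(buf, strlen(buf), good ^ 1u)); /* flipped bit -> error */
        return 0;
    }

Compiling this with any C compiler and flipping a single bit in either the payload or the digest reproduces the kind of mismatch these log lines report.
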
00:19:27.859 [2024-07-21 16:34:46.045414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.045664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.045704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.050522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.050828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.050864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.055678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.055925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.055965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:27.859 [2024-07-21 16:34:46.060660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:27.859 [2024-07-21 16:34:46.060912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.859 [2024-07-21 16:34:46.060931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.118 [2024-07-21 16:34:46.065667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.065916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.065962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.070976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.071264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.071304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.075985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.076242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.076292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.080980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.081230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.081250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.085933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.086183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.086203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.090977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.091228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.091254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.095922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.096172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.096191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.101218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.101503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.101543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.106486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.106784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.106821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.111682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.111933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.111957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.117151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.117447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.117496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.122524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.122833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.122863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.127715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.127965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.127984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.132879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.133129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.133148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.137852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.138102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.138126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.142995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.143246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.143266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.148225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.148514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.148557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.153422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.153674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.153693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.158529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.158842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.158873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.163651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.163912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.163932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.168768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.169017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.169038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.173879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.174130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.174166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.179107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.179388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.179420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.184071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.184332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 
[2024-07-21 16:34:46.184351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.189390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.189652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.189697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.194467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.194780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.194808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.199675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.199935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.199954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.204821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.205124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.205166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.119 [2024-07-21 16:34:46.210371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.119 [2024-07-21 16:34:46.210661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.119 [2024-07-21 16:34:46.210706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.215836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.216098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.216124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.221363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.221697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.221728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.226922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.227183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.227230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.232151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.232443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.232469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.237367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.237662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.237692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.242591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.242894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.242925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.247755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.248014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.248059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.252839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.253089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.253108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.257836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.258088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.258108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.262960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.263210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.263228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.268019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.268297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.268337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.273175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.273445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.273495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.278204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.278509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.278562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.283398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.283658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.283715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.288494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.288773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.288814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.293537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.293810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.293830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.298751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.299012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.299078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.304000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.304250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.304300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.309078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.309352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.309378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.314481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.314787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.314845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.120 [2024-07-21 16:34:46.319977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.120 [2024-07-21 16:34:46.320239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.120 [2024-07-21 16:34:46.320315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.379 [2024-07-21 16:34:46.325478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.379 [2024-07-21 16:34:46.325763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.379 [2024-07-21 16:34:46.325790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.379 [2024-07-21 16:34:46.330592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.379 
[2024-07-21 16:34:46.330891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.379 [2024-07-21 16:34:46.330919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.379 [2024-07-21 16:34:46.335813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.379 [2024-07-21 16:34:46.336070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.379 [2024-07-21 16:34:46.336093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.379 [2024-07-21 16:34:46.341342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.379 [2024-07-21 16:34:46.341591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.379 [2024-07-21 16:34:46.341610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.379 [2024-07-21 16:34:46.346473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.379 [2024-07-21 16:34:46.346760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.379 [2024-07-21 16:34:46.346789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.379 [2024-07-21 16:34:46.351807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.379 [2024-07-21 16:34:46.352085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.379 [2024-07-21 16:34:46.352103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.379 [2024-07-21 16:34:46.356891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.379 [2024-07-21 16:34:46.357141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.379 [2024-07-21 16:34:46.357167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.379 [2024-07-21 16:34:46.361927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.379 [2024-07-21 16:34:46.362179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.379 [2024-07-21 16:34:46.362205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.367034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.367284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.367304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.372053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.372345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.372382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.377155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.377426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.377449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.382178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.382477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.382514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.387311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.387562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.387581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.392379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.392649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.392713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.397460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.397709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.397727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.402516] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.402803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.402833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.407498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.407748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.407770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.412558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.412833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.412874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.417670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.417921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.417946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.422751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.423001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.423019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.427907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.428157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.428175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.432931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.433183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.433202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
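
Editor's note on reading the completion lines: in COMMAND TRANSIENT TRANSPORT ERROR (00/22), the pair in parentheses is status code type / status code in hex, i.e. SCT 0x0 (generic command status) and SC 0x22 (transient transport error); p, m and dnr are the phase, more and do-not-retry bits, and sqhd is the submission queue head pointer. The minimal C sketch below unpacks those fields from completion dword 3 following the NVMe base specification layout; it uses only local names and is not the SPDK API.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative decode of NVMe completion dword 3.
     * (sqhd lives in dword 2 and is omitted here.) */
    struct cpl_status {
        uint16_t cid;  /* command identifier, bits 15:0  */
        uint8_t  p;    /* phase tag,          bit  16    */
        uint8_t  sc;   /* status code,        bits 24:17 */
        uint8_t  sct;  /* status code type,   bits 27:25 */
        uint8_t  m;    /* more,               bit  30    */
        uint8_t  dnr;  /* do not retry,       bit  31    */
    };

    static struct cpl_status decode_cdw3(uint32_t cdw3)
    {
        struct cpl_status s;

        s.cid = cdw3 & 0xFFFFu;
        s.p   = (cdw3 >> 16) & 0x1u;
        s.sc  = (cdw3 >> 17) & 0xFFu;
        s.sct = (cdw3 >> 25) & 0x7u;
        s.m   = (cdw3 >> 30) & 0x1u;
        s.dnr = (cdw3 >> 31) & 0x1u;
        return s;
    }

    int main(void)
    {
        /* Example matching the notices above: cid:15, SCT 0x0, SC 0x22,
         * p:0 m:0 dnr:0. */
        uint32_t cdw3 = 0x000Fu | (0x22u << 17);
        struct cpl_status s = decode_cdw3(cdw3);

        printf("cid:%u sct:%02x sc:%02x p:%u m:%u dnr:%u\n",
               s.cid, s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

With dnr:0 the completions are retryable, which is consistent with the "transient" classification printed in these notices.
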
00:19:28.380 [2024-07-21 16:34:46.437941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.438192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.438215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.442975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.443225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.443243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.447977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.448227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.448246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.453048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.453311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.453329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.458056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.458316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.458359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.463138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.463400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.463420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.468182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.468469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.468507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.473248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.473510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.473528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.478298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.478582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.478625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.483365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.483616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.483636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.488394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.488655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.488704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.493452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.493705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.493732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.498476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.498749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.498769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.503469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.380 [2024-07-21 16:34:46.503721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.380 [2024-07-21 16:34:46.503739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.380 [2024-07-21 16:34:46.508566] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.508833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.508867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.513539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.513788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.513808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.518666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.518923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.518946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.523714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.523973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.524013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.528846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.529097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.529116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.533875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.534135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.534162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.538979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.539230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.539249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.543965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.544213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.544237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.549046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.549319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.549354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.554049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.554309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.554367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.559116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.559377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.559396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.564103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.564391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.564410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.569195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.569480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.569517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.574276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.574563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 
[2024-07-21 16:34:46.574585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.579334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.579592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.579631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.381 [2024-07-21 16:34:46.584402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.381 [2024-07-21 16:34:46.584676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.381 [2024-07-21 16:34:46.584700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.589470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.589720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.589738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.594485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.594774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.594815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.599563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.599814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.599840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.604584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.604847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.604866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.609625] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.609874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.609896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.614700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.614950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.614968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.619780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.620029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.620047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.624858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.625107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.625126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.629888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.630137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.630155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.634885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.635135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.635154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.639971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.640220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.640239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.644997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.645247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.645266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.650006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.650256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.650285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.655159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.655421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.655439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.660163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.660449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.660468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.665183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.665445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.665480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.670252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.670547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.670583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.675338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.675589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.675612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.680344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.680606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.680630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.685404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.685655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.685678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.690372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.690650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.690688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.695458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.695719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.695748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.700463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.700747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.700781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.705560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.705810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.705839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.710678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.710948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.710992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.715720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 
[2024-07-21 16:34:46.715971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.715990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.720750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.721009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.640 [2024-07-21 16:34:46.721028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.640 [2024-07-21 16:34:46.725814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.640 [2024-07-21 16:34:46.726063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.726083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.731064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.731335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.731361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.736113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.736396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.736426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.741245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.741494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.741514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.746315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.746612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.746662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.751458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) 
with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.751709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.751727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.756532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.756800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.756819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.761597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.761847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.761868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.766714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.766965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.766986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.771736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.771993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.772012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.776804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.777070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.777114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.781895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.782145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.782163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.787002] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.787252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.787271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.792004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.792254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.792319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.797104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.797391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.797410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.802134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.802420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.802439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.807204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.807476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.807513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.812318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.812589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.812631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.817411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.817661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.817679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.822434] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.822726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.822768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.827544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.827795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.827813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.832522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.832793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.832835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.837677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.837927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.837945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.641 [2024-07-21 16:34:46.842753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.641 [2024-07-21 16:34:46.843012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.641 [2024-07-21 16:34:46.843031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.847846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.848096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.848116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.852952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.853201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.853220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:28.900 [2024-07-21 16:34:46.857956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.858206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.858230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.862979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.863229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.863248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.868027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.868302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.868338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.873075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.873338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.873358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.878056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.878318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.878377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.883195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.883472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.883509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.888308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.888570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.888590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.893358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.893610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.893630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.898378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.898656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.898706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.903408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.903659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.903678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.908402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.908677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.908713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.913630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.913880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.913900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.918771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.919023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.919067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.923857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.924107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.924126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.929052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.929335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.929371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.934333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.934633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.934686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.939623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.900 [2024-07-21 16:34:46.939885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.900 [2024-07-21 16:34:46.939904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.900 [2024-07-21 16:34:46.944887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.945156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.945175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.950037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.950297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.950331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.955135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.955395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.955414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.960066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.960362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.960382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.965234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.965521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.965563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.970368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.970640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.970692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.975750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.976011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.976035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.980737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.980987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.981007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.985715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.985964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.985983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.990762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.991008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:46.991028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:46.995757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:46.996007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 
[2024-07-21 16:34:46.996025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.000779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.001029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.001062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.005774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.006021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.006040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.010811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.011059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.011077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.015843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.016092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.016110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.020867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.021117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.021135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.025893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.026142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.026162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.030951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.031199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.031219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.035968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.036216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.036236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.041015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.041271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.041301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.045988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.046247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.046296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.051052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.051320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.051343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.056112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.056405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.056424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.061144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.061416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.061452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.066151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.066437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.901 [2024-07-21 16:34:47.066457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.901 [2024-07-21 16:34:47.071174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.901 [2024-07-21 16:34:47.071440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.902 [2024-07-21 16:34:47.071480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.902 [2024-07-21 16:34:47.076246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.902 [2024-07-21 16:34:47.076537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.902 [2024-07-21 16:34:47.076579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.902 [2024-07-21 16:34:47.081360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.902 [2024-07-21 16:34:47.081610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.902 [2024-07-21 16:34:47.081628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.902 [2024-07-21 16:34:47.086409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.902 [2024-07-21 16:34:47.086733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.902 [2024-07-21 16:34:47.086764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.902 [2024-07-21 16:34:47.091479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.902 [2024-07-21 16:34:47.091729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.902 [2024-07-21 16:34:47.091763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.902 [2024-07-21 16:34:47.096567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.902 [2024-07-21 16:34:47.096834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.902 [2024-07-21 16:34:47.096886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.902 [2024-07-21 16:34:47.101729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:28.902 [2024-07-21 16:34:47.101985] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.902 [2024-07-21 16:34:47.102020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.160 [2024-07-21 16:34:47.106815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.160 [2024-07-21 16:34:47.107066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.160 [2024-07-21 16:34:47.107112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.160 [2024-07-21 16:34:47.111851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.160 [2024-07-21 16:34:47.112100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.160 [2024-07-21 16:34:47.112118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.160 [2024-07-21 16:34:47.116961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.160 [2024-07-21 16:34:47.117210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.160 [2024-07-21 16:34:47.117234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.160 [2024-07-21 16:34:47.122007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.160 [2024-07-21 16:34:47.122256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.160 [2024-07-21 16:34:47.122316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.160 [2024-07-21 16:34:47.127062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.160 [2024-07-21 16:34:47.127321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.160 [2024-07-21 16:34:47.127373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.160 [2024-07-21 16:34:47.132048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.160 [2024-07-21 16:34:47.132340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.160 [2024-07-21 16:34:47.132382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.160 [2024-07-21 16:34:47.137358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.160 
[2024-07-21 16:34:47.137619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.160 [2024-07-21 16:34:47.137646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.160 [2024-07-21 16:34:47.142382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.160 [2024-07-21 16:34:47.142642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.160 [2024-07-21 16:34:47.142677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.147416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.147665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.147685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.152401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.152662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.152712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.157509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.157766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.157798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.162534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.162822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.162857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.167528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.167778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.167798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.172568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) 
with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.172842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.172870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.177641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.177900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.177924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.182617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.182891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.182911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.187581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.187840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.187868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.192567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.192841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.192864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.197562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.197819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.197837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.202585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.202860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.202902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.207705] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.207954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.207973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.212838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.213086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.213105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.217939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.218187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.218206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.223075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.223350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.223386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.228373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.228640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.228699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.233915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.234176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.234227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.239576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.239895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.239921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.245210] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.245544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.245588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.250717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.250966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.251031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.256014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.256264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.256344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.261245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.261534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.261592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.266448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.266748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.266781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.271599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.271873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.271930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.276838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.277100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.277136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
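Every failure in this stretch follows the same two-line pattern: data_crc32_calc_done in tcp.c flags a data digest mismatch on the PDU, and the WRITE it belongs to then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). When skimming a run like this it can help to tally the two messages from a saved copy of the console output; a small sketch follows, where the build.log filename is only a placeholder for wherever this output was saved.

```sh
# Tally digest failures and the resulting transient transport completions in a
# saved copy of this console output ("build.log" is a placeholder filename).
grep -o 'Data digest error on tqpair' build.log | wc -l
grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log | wc -l
```

In this trace the two counts line up one to one, since every digest error here is followed by a transient transport error completion for the same command.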
00:19:29.161 [2024-07-21 16:34:47.282043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.282291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.282309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.287204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.287537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.287572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.161 [2024-07-21 16:34:47.292360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.161 [2024-07-21 16:34:47.292608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.161 [2024-07-21 16:34:47.292649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.297493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.297742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.297778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.302580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.302845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.302896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.307901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.308172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.308199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.312931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.313190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.313220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.318232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.318541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.318569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.323493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.323780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.323829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.328575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.328848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.328869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.334025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.334304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.334368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.339401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.339693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.339727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.344851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.345101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.345120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.350191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.350571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.350605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.355558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.355826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.355867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.361099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.361426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.361459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.162 [2024-07-21 16:34:47.366492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.162 [2024-07-21 16:34:47.366819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.162 [2024-07-21 16:34:47.366851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.420 [2024-07-21 16:34:47.371592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.420 [2024-07-21 16:34:47.371876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.420 [2024-07-21 16:34:47.371903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.420 [2024-07-21 16:34:47.376809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.420 [2024-07-21 16:34:47.377084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.420 [2024-07-21 16:34:47.377112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.420 [2024-07-21 16:34:47.381891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.420 [2024-07-21 16:34:47.382138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.420 [2024-07-21 16:34:47.382174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.420 [2024-07-21 16:34:47.386979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.420 [2024-07-21 16:34:47.387256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.420 [2024-07-21 16:34:47.387293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.420 [2024-07-21 16:34:47.392126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.420 [2024-07-21 16:34:47.392430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.420 [2024-07-21 16:34:47.392455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.420 [2024-07-21 16:34:47.397335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.420 [2024-07-21 16:34:47.397614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.420 [2024-07-21 16:34:47.397645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.420 [2024-07-21 16:34:47.402565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.420 [2024-07-21 16:34:47.402862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.420 [2024-07-21 16:34:47.402893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.420 [2024-07-21 16:34:47.407699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.420 [2024-07-21 16:34:47.408013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.420 [2024-07-21 16:34:47.408044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.420 [2024-07-21 16:34:47.413092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.420 [2024-07-21 16:34:47.413394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.413433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.418074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.418406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.418438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.423150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.423413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 
[2024-07-21 16:34:47.423469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.428456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.428740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.428769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.433469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.433752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.433783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.438479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.438784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.438816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.443689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.443938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.443973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.448720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.448996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.449027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.453696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.453956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.453996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.458825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.459101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.459136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.463910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.464159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.464194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.468866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.469115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.469151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.474129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.474425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.474461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.479135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.479399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.479459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.484177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.484438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.484457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.489225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.489524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.489555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.494231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.494571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.494604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.499378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.499661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.499697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.504774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.505078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.505118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.510025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.510286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.510358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.515228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.515563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.515595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.520419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.520670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.520710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.525439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.525690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.525713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.530610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.530892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.530949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.535783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.536068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.536100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.540832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.541096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.541132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.545888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.546138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.546172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.550902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.551176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.551211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.555929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.421 [2024-07-21 16:34:47.556192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.421 [2024-07-21 16:34:47.556212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.421 [2024-07-21 16:34:47.560892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.561142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.561161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.565850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 
[2024-07-21 16:34:47.566098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.566140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.570935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.571183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.571241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.576072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.576345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.576390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.581097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.581356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.581392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.586059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.586337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.586381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.591043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.591320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.591339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.596006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.596293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.596313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.600962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.601225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.601246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.606030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.606337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.606365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.611236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.611534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.611563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.616359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.616624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.616644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.621431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.621704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.621746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.422 [2024-07-21 16:34:47.626515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.422 [2024-07-21 16:34:47.626809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.422 [2024-07-21 16:34:47.626842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.680 [2024-07-21 16:34:47.631603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.680 [2024-07-21 16:34:47.631862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.680 [2024-07-21 16:34:47.631903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.680 [2024-07-21 16:34:47.636675] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.680 [2024-07-21 16:34:47.636933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.680 [2024-07-21 16:34:47.636992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.680 [2024-07-21 16:34:47.641777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.680 [2024-07-21 16:34:47.642025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.680 [2024-07-21 16:34:47.642043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.680 [2024-07-21 16:34:47.646833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c72690) with pdu=0x2000190fef90 00:19:29.680 [2024-07-21 16:34:47.647094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.680 [2024-07-21 16:34:47.647116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.680 00:19:29.680 Latency(us) 00:19:29.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.680 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:29.680 nvme0n1 : 2.00 6047.47 755.93 0.00 0.00 2640.60 1921.40 9770.82 00:19:29.680 =================================================================================================================== 00:19:29.680 Total : 6047.47 755.93 0.00 0.00 2640.60 1921.40 9770.82 00:19:29.680 0 00:19:29.680 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:29.680 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:29.680 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:29.680 | .driver_specific 00:19:29.680 | .nvme_error 00:19:29.680 | .status_code 00:19:29.680 | .command_transient_transport_error' 00:19:29.680 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 390 > 0 )) 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93995 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93995 ']' 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93995 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93995 00:19:29.937 killing process with pid 93995 00:19:29.937 Received shutdown signal, test time 
was about 2.000000 seconds 00:19:29.937 00:19:29.937 Latency(us) 00:19:29.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.937 =================================================================================================================== 00:19:29.937 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93995' 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93995 00:19:29.937 16:34:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93995 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93685 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93685 ']' 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93685 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93685 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93685' 00:19:30.195 killing process with pid 93685 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93685 00:19:30.195 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93685 00:19:30.452 ************************************ 00:19:30.452 END TEST nvmf_digest_error 00:19:30.452 ************************************ 00:19:30.452 00:19:30.452 real 0m18.490s 00:19:30.452 user 0m34.671s 00:19:30.452 sys 0m4.824s 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.452 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.452 rmmod nvme_tcp 00:19:30.710 rmmod 
nvme_fabrics 00:19:30.710 rmmod nvme_keyring 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93685 ']' 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93685 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93685 ']' 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93685 00:19:30.710 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93685) - No such process 00:19:30.710 Process with pid 93685 is not found 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93685 is not found' 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:30.710 00:19:30.710 real 0m38.135s 00:19:30.710 user 1m10.409s 00:19:30.710 sys 0m9.997s 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:30.710 16:34:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:30.710 ************************************ 00:19:30.710 END TEST nvmf_digest 00:19:30.710 ************************************ 00:19:30.710 16:34:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:30.710 16:34:48 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:19:30.710 16:34:48 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:19:30.710 16:34:48 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:30.710 16:34:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:30.710 16:34:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.710 16:34:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:30.710 ************************************ 00:19:30.710 START TEST nvmf_mdns_discovery 00:19:30.710 ************************************ 00:19:30.710 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:30.710 * Looking for test storage... 
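As an aside on the digest_error result above: the (( 390 > 0 )) assertion comes from get_transient_errcount, which pulls the NVMe error counters out of bdev_get_iostat over the bdevperf RPC socket. A minimal standalone version of that query is sketched below, reusing the socket path, bdev name, and jq filter shown in the trace; it only returns anything while that bdevperf instance is still running.

```sh
# Query the transient transport error counter the same way host/digest.sh does.
# Socket path, bdev name, and jq filter are taken from the trace above; this
# assumes the bdevperf instance behind /var/tmp/bperf.sock is still up.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error'
```

In the run above this evaluates to 390, which is what satisfies the check at host/digest.sh@71.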
00:19:30.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:19:30.711 
16:34:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.711 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:30.978 Cannot find device "nvmf_tgt_br" 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.978 Cannot find device "nvmf_tgt_br2" 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:30.978 Cannot find device "nvmf_tgt_br" 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:30.978 Cannot find device "nvmf_tgt_br2" 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:19:30.978 16:34:48 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:30.978 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:31.238 16:34:49 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:31.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:19:31.238 00:19:31.238 --- 10.0.0.2 ping statistics --- 00:19:31.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.238 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:31.238 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.238 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:19:31.238 00:19:31.238 --- 10.0.0.3 ping statistics --- 00:19:31.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.238 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:31.238 00:19:31.238 --- 10.0.0.1 ping statistics --- 00:19:31.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.238 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94291 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:31.238 
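Condensed from the nvmf_veth_init trace above, the topology the rest of the suite runs on is: the initiator keeps 10.0.0.1 on nvmf_init_if, the target namespace nvmf_tgt_ns_spdk owns 10.0.0.2 (nvmf_tgt_if) and 10.0.0.3 (nvmf_tgt_if2), and the host-side veth peers are tied together through the nvmf_br bridge. A sketch of the same setup, with the individual "ip link set ... up" steps omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bridge the host-side peers so initiator and both target interfaces share one L2 segment
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings in the trace then simply confirm that 10.0.0.2, 10.0.0.3 and (from inside the namespace) 10.0.0.1 are reachable before the target application is started.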
16:34:49 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94291 00:19:31.238 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94291 ']' 00:19:31.239 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.239 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.239 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.239 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.239 16:34:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:31.239 [2024-07-21 16:34:49.334726] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:19:31.239 [2024-07-21 16:34:49.334819] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.496 [2024-07-21 16:34:49.475570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.496 [2024-07-21 16:34:49.579486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.496 [2024-07-21 16:34:49.579566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.496 [2024-07-21 16:34:49.579581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.496 [2024-07-21 16:34:49.579591] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.496 [2024-07-21 16:34:49.579601] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
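nvmfappstart above amounts to launching the target binary inside the namespace with initialization deferred to RPC, then waiting for its RPC socket; waitforlisten 94291 is that wait. A minimal stand-in, assuming the default /var/tmp/spdk.sock socket and using a simple polling loop in place of the real waitforlisten helper:

    # launch the target in the target namespace; --wait-for-rpc defers subsystem init
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    # poll the RPC socket until the application answers (simplified waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done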
00:19:31.496 [2024-07-21 16:34:49.579633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.074 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.074 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:32.074 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.074 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:32.074 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.074 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.074 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:19:32.075 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.075 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.075 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.075 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:19:32.075 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.075 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.333 [2024-07-21 16:34:50.387888] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.333 [2024-07-21 16:34:50.396066] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.333 null0 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:19:32.333 null1 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.333 null2 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.333 null3 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94342 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94342 /tmp/host.sock 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94342 ']' 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:32.333 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:32.333 16:34:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.333 [2024-07-21 16:34:50.505937] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
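Taken together, the RPCs traced through this stretch bring the target to its starting state: discovery filtering by address, framework start, the TCP transport, a discovery listener on 10.0.0.2:8009, and four null bdevs to export later. Expressed directly against rpc.py (a sketch of what rpc_cmd expands to here; the $rpc variable and the loop are shorthand, not part of the script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_set_config --discovery-filter=address
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    for b in null0 null1 null2 null3; do
        $rpc bdev_null_create "$b" 1000 512   # same size/block-size arguments as in the trace
    done
    $rpc bdev_wait_for_examine

A second nvmf_tgt instance is then started on the initiator side with -m 0x1 -r /tmp/host.sock; that instance plays the host role, and every "rpc_cmd -s /tmp/host.sock" call below talks to it rather than to the target in the namespace.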
00:19:32.333 [2024-07-21 16:34:50.506047] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94342 ] 00:19:32.591 [2024-07-21 16:34:50.645470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.591 [2024-07-21 16:34:50.753152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.526 16:34:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.526 16:34:51 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:33.526 16:34:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:19:33.526 16:34:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:19:33.526 16:34:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:19:33.526 16:34:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94371 00:19:33.526 16:34:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:19:33.526 16:34:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:19:33.526 16:34:51 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:19:33.526 Process 986 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:19:33.526 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:19:33.526 Successfully dropped root privileges. 00:19:33.526 avahi-daemon 0.8 starting up. 00:19:33.526 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:19:33.526 Successfully called chroot(). 00:19:33.526 Successfully dropped remaining capabilities. 00:19:33.526 No service file found in /etc/avahi/services. 00:19:34.484 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:34.484 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:19:34.484 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:34.484 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:19:34.484 Network interface enumeration completed. 00:19:34.484 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:19:34.484 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:19:34.484 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:19:34.484 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:19:34.484 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 374021019. 
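The responder the discovery test relies on is a private avahi-daemon confined to the target namespace, fed its configuration through process substitution (which is what shows up as /dev/fd/63 in the trace). Reconstructed from the echo above; the "|| true" is an addition so a missing system daemon does not abort the sketch:

    # restart avahi inside the target namespace, restricted to the two target-side
    # interfaces, IPv4 only
    avahi-daemon --kill || true
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
        '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
    avahipid=$!

The avahi startup messages that follow confirm it joined the mDNS multicast groups on nvmf_tgt_if and nvmf_tgt_if2 with addresses 10.0.0.2 and 10.0.0.3.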
00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:34.484 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
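The key call in this block is bdev_nvme_start_mdns_discovery against the host application's socket: it registers an mDNS browser for the _nvme-disc._tcp service type and attaches to whatever discovery controllers it resolves, using nqn.2021-12.io.spdk:test as the host NQN. Against rpc.py the two calls from the trace are (the $rpc variable is shorthand only):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # enable bdev_nvme debug logging on the host app, then start the mDNS browser
    $rpc -s /tmp/host.sock log_set_flag bdev_nvme
    $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

The '' == '' comparisons that follow simply assert that no controllers or bdevs exist yet on the host side, since nothing has been published over mDNS at this point.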
00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.754 [2024-07-21 16:34:52.918827] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:34.754 16:34:52 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:34.754 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.012 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:19:35.012 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:35.012 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.012 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.012 [2024-07-21 16:34:52.996497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.012 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.012 16:34:52 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.012 [2024-07-21 16:34:53.036395] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
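What the test publishes is assembled across this stretch of the trace: two subsystems (cnode0 reachable via 10.0.0.2, cnode20 via 10.0.0.3), each given a null bdev as a namespace and the test host NQN, data listeners on port 4420, a second discovery listener on 10.0.0.3:8009, and finally nvmf_publish_mdns_prr to announce the discovery service over mDNS. In rpc.py terms, roughly (ordering and the loop are a simplification):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for cnode in nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:cnode20; do
        $rpc nvmf_create_subsystem "$cnode"
        $rpc nvmf_subsystem_add_host "$cnode" nqn.2021-12.io.spdk:test
    done
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0  null0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    $rpc nvmf_publish_mdns_prr   # register the discovery service records with avahi

The mdns_resolve_handler and discovery_attach_cb messages further down are the direct result: both discovery endpoints (10.0.0.2:8009 and 10.0.0.3:8009) are resolved, and the host attaches mdns0_nvme0 and mdns1_nvme0 controllers on port 4420.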
00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.012 [2024-07-21 16:34:53.044402] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.012 16:34:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:19:35.958 [2024-07-21 16:34:53.818828] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:36.214 [2024-07-21 16:34:54.418844] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:36.214 [2024-07-21 16:34:54.418869] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:36.214 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:36.214 cookie is 0 00:19:36.214 is_local: 1 00:19:36.214 our_own: 0 00:19:36.214 wide_area: 0 00:19:36.214 multicast: 1 00:19:36.214 cached: 1 00:19:36.471 [2024-07-21 16:34:54.518836] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:36.471 [2024-07-21 16:34:54.518859] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:36.471 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:36.471 cookie is 0 00:19:36.471 is_local: 1 00:19:36.471 our_own: 0 00:19:36.471 wide_area: 0 00:19:36.471 multicast: 1 00:19:36.471 cached: 1 00:19:36.471 [2024-07-21 16:34:54.518868] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:36.471 [2024-07-21 16:34:54.618837] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:36.471 [2024-07-21 16:34:54.618859] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:36.471 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:36.471 cookie is 0 00:19:36.471 is_local: 1 00:19:36.471 our_own: 0 00:19:36.471 wide_area: 0 00:19:36.471 multicast: 1 00:19:36.471 cached: 1 00:19:36.727 [2024-07-21 16:34:54.718837] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:36.727 [2024-07-21 16:34:54.718859] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:36.727 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:36.727 cookie is 0 00:19:36.727 is_local: 1 00:19:36.727 our_own: 0 00:19:36.727 wide_area: 0 00:19:36.727 multicast: 1 00:19:36.727 cached: 1 00:19:36.727 [2024-07-21 16:34:54.718868] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:37.292 [2024-07-21 16:34:55.431831] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:37.292 [2024-07-21 16:34:55.431856] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:37.292 [2024-07-21 16:34:55.431874] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:37.550 [2024-07-21 16:34:55.517930] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:19:37.550 [2024-07-21 16:34:55.574771] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:37.550 [2024-07-21 16:34:55.574799] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:37.550 [2024-07-21 16:34:55.631469] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:37.550 [2024-07-21 16:34:55.631492] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:37.550 [2024-07-21 16:34:55.631509] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:37.550 [2024-07-21 16:34:55.717575] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:19:37.807 [2024-07-21 16:34:55.773169] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:37.807 [2024-07-21 16:34:55.773197] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:40.332 16:34:58 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:19:40.332 
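The checks in this stretch all go through small helpers that pull one field out of an RPC result and flatten it for string comparison; in the trace they expand to pipelines of the following shape (shown against the host socket, with the same jq/sort/xargs steps as above; $rpc is shorthand only):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # names of mDNS discovery services and of the attached controllers on the host app
    $rpc -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name' | sort | xargs
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers         | jq -r '.[].name' | sort | xargs
    # bdevs created by attaching to the exported namespaces
    $rpc -s /tmp/host.sock bdev_get_bdevs                    | jq -r '.[].name' | sort | xargs
    # per-controller path info, e.g. the trsvcid compared against 4420 above
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

At this point the expected answers are the mdns discovery service, the mdns0_nvme0/mdns1_nvme0 controllers and their n1 bdevs; the n2 bdevs only appear after the extra namespaces (null1, null3) are added a few lines later.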
16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:40.332 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.333 16:34:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.707 [2024-07-21 16:34:59.626503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:41.707 [2024-07-21 16:34:59.627160] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:41.707 [2024-07-21 16:34:59.627207] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:41.707 [2024-07-21 16:34:59.627247] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:41.707 [2024-07-21 16:34:59.627262] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.707 [2024-07-21 16:34:59.634489] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:41.707 [2024-07-21 16:34:59.635165] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:41.707 [2024-07-21 16:34:59.635262] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.707 16:34:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:19:41.707 [2024-07-21 16:34:59.766238] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:19:41.707 [2024-07-21 16:34:59.766448] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:19:41.707 [2024-07-21 16:34:59.825491] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:41.707 [2024-07-21 16:34:59.825519] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:41.708 [2024-07-21 16:34:59.825543] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:41.708 [2024-07-21 16:34:59.825560] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:41.708 [2024-07-21 16:34:59.825622] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:41.708 [2024-07-21 16:34:59.825633] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:41.708 [2024-07-21 16:34:59.825637] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:41.708 [2024-07-21 16:34:59.825651] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:41.708 [2024-07-21 16:34:59.871341] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:41.708 [2024-07-21 16:34:59.871365] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:41.708 [2024-07-21 16:34:59.871429] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:41.708 [2024-07-21 16:34:59.871440] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:42.639 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.897 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.897 [2024-07-21 16:35:00.951468] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:42.897 [2024-07-21 16:35:00.951503] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:42.897 [2024-07-21 16:35:00.951539] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:42.897 [2024-07-21 16:35:00.951553] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:42.897 [2024-07-21 16:35:00.954139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.897 [2024-07-21 16:35:00.954191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.897 [2024-07-21 16:35:00.954204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.897 [2024-07-21 16:35:00.954213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.897 [2024-07-21 16:35:00.954222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.897 [2024-07-21 16:35:00.954231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.897 [2024-07-21 16:35:00.954240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.897 [2024-07-21 16:35:00.954248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.898 [2024-07-21 16:35:00.954257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.898 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.898 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:42.898 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.898 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.898 [2024-07-21 16:35:00.959489] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:42.898 [2024-07-21 16:35:00.959570] 
bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:42.898 [2024-07-21 16:35:00.962032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.898 [2024-07-21 16:35:00.962081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.898 [2024-07-21 16:35:00.962094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.898 [2024-07-21 16:35:00.962103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.898 [2024-07-21 16:35:00.962112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.898 [2024-07-21 16:35:00.962120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.898 [2024-07-21 16:35:00.962129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.898 [2024-07-21 16:35:00.962137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.898 [2024-07-21 16:35:00.962145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.898 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.898 16:35:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:19:42.898 [2024-07-21 16:35:00.964102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:00.972000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:00.974124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.898 [2024-07-21 16:35:00.974238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.898 [2024-07-21 16:35:00.974290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.898 [2024-07-21 16:35:00.974304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.898 [2024-07-21 16:35:00.974323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:00.974340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.898 [2024-07-21 16:35:00.974360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.898 [2024-07-21 16:35:00.974375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.898 [2024-07-21 16:35:00.974391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
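The trace above (mdns_discovery.sh@160 and @161) removes the port-4420 listeners from both subsystems. The host-side controllers attached through discovery still hold a connected path on 4420, so each reconnect attempt fails with connect() errno 111 (connection refused) and the controller cycles through the reset/reinitialization failures logged below until the next discovery log page arrives. A minimal sketch of the two target-side RPCs as they appear in the trace (rpc_cmd is the autotest helper that forwards to the SPDK JSON-RPC server; the NQNs and addresses are the ones used in this run):

    # Drop the 10.0.0.x:4420 listeners; only the 4421 listeners added earlier remain.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420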
00:19:42.898 [2024-07-21 16:35:00.982010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.898 [2024-07-21 16:35:00.982115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.898 [2024-07-21 16:35:00.982138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.898 [2024-07-21 16:35:00.982149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.898 [2024-07-21 16:35:00.982166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:00.982182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.898 [2024-07-21 16:35:00.982191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.898 [2024-07-21 16:35:00.982199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.898 [2024-07-21 16:35:00.982230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.898 [2024-07-21 16:35:00.984194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.898 [2024-07-21 16:35:00.984309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.898 [2024-07-21 16:35:00.984332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.898 [2024-07-21 16:35:00.984343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.898 [2024-07-21 16:35:00.984360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:00.984375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.898 [2024-07-21 16:35:00.984385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.898 [2024-07-21 16:35:00.984393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.898 [2024-07-21 16:35:00.984408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.898 [2024-07-21 16:35:00.992081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.898 [2024-07-21 16:35:00.992184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.898 [2024-07-21 16:35:00.992206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.898 [2024-07-21 16:35:00.992217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.898 [2024-07-21 16:35:00.992234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:00.992249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.898 [2024-07-21 16:35:00.992277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.898 [2024-07-21 16:35:00.992289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.898 [2024-07-21 16:35:00.992305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.898 [2024-07-21 16:35:00.994264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.898 [2024-07-21 16:35:00.994401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.898 [2024-07-21 16:35:00.994423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.898 [2024-07-21 16:35:00.994435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.898 [2024-07-21 16:35:00.994452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:00.994467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.898 [2024-07-21 16:35:00.994476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.898 [2024-07-21 16:35:00.994485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.898 [2024-07-21 16:35:00.994500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.898 [2024-07-21 16:35:01.002153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.898 [2024-07-21 16:35:01.002254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.898 [2024-07-21 16:35:01.002305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.898 [2024-07-21 16:35:01.002318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.898 [2024-07-21 16:35:01.002336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:01.002361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.898 [2024-07-21 16:35:01.002374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.898 [2024-07-21 16:35:01.002383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.898 [2024-07-21 16:35:01.002399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.898 [2024-07-21 16:35:01.004361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.898 [2024-07-21 16:35:01.004448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.898 [2024-07-21 16:35:01.004470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.898 [2024-07-21 16:35:01.004480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.898 [2024-07-21 16:35:01.004497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:01.004511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.898 [2024-07-21 16:35:01.004521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.898 [2024-07-21 16:35:01.004530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.898 [2024-07-21 16:35:01.004625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.898 [2024-07-21 16:35:01.012222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.898 [2024-07-21 16:35:01.012336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.898 [2024-07-21 16:35:01.012359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.898 [2024-07-21 16:35:01.012371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.898 [2024-07-21 16:35:01.012388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.898 [2024-07-21 16:35:01.012403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.898 [2024-07-21 16:35:01.012412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.898 [2024-07-21 16:35:01.012421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.898 [2024-07-21 16:35:01.012436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.898 [2024-07-21 16:35:01.014417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.898 [2024-07-21 16:35:01.014503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.898 [2024-07-21 16:35:01.014525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.898 [2024-07-21 16:35:01.014536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.014553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.014585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.014597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.014606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.899 [2024-07-21 16:35:01.014637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.899 [2024-07-21 16:35:01.022302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.899 [2024-07-21 16:35:01.022439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.022465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.899 [2024-07-21 16:35:01.022478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.022495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.022510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.022520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.022529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.899 [2024-07-21 16:35:01.022544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.899 [2024-07-21 16:35:01.024473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.899 [2024-07-21 16:35:01.024560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.024582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.899 [2024-07-21 16:35:01.024593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.024610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.024642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.024655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.024663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.899 [2024-07-21 16:35:01.024694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.899 [2024-07-21 16:35:01.032382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.899 [2024-07-21 16:35:01.032483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.032505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.899 [2024-07-21 16:35:01.032516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.032533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.032548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.032558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.032566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.899 [2024-07-21 16:35:01.032581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.899 [2024-07-21 16:35:01.034528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.899 [2024-07-21 16:35:01.034618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.034640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.899 [2024-07-21 16:35:01.034652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.034669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.034732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.034761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.034792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.899 [2024-07-21 16:35:01.034808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.899 [2024-07-21 16:35:01.042452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.899 [2024-07-21 16:35:01.042557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.042579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.899 [2024-07-21 16:35:01.042590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.042607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.042623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.042632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.042641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.899 [2024-07-21 16:35:01.042656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.899 [2024-07-21 16:35:01.044584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.899 [2024-07-21 16:35:01.044667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.044688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.899 [2024-07-21 16:35:01.044699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.044715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.044746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.044757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.044766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.899 [2024-07-21 16:35:01.044781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.899 [2024-07-21 16:35:01.052525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.899 [2024-07-21 16:35:01.052633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.052655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.899 [2024-07-21 16:35:01.052666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.052683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.052698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.052708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.052718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.899 [2024-07-21 16:35:01.052733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.899 [2024-07-21 16:35:01.054640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.899 [2024-07-21 16:35:01.054759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.054781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.899 [2024-07-21 16:35:01.054792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.054809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.054840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.054852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.054861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.899 [2024-07-21 16:35:01.054876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.899 [2024-07-21 16:35:01.062600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.899 [2024-07-21 16:35:01.062705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.062743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.899 [2024-07-21 16:35:01.062754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.062771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.062787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.062796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.062805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.899 [2024-07-21 16:35:01.062820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.899 [2024-07-21 16:35:01.064728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.899 [2024-07-21 16:35:01.064828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.064850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.899 [2024-07-21 16:35:01.064861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.899 [2024-07-21 16:35:01.064877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.899 [2024-07-21 16:35:01.064909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.899 [2024-07-21 16:35:01.064921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.899 [2024-07-21 16:35:01.064930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.899 [2024-07-21 16:35:01.064945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.899 [2024-07-21 16:35:01.072673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.899 [2024-07-21 16:35:01.072774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.899 [2024-07-21 16:35:01.072795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.900 [2024-07-21 16:35:01.072807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.900 [2024-07-21 16:35:01.072823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.900 [2024-07-21 16:35:01.072838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.900 [2024-07-21 16:35:01.072847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.900 [2024-07-21 16:35:01.072856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.900 [2024-07-21 16:35:01.072870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.900 [2024-07-21 16:35:01.074798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.900 [2024-07-21 16:35:01.074898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.900 [2024-07-21 16:35:01.074920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.900 [2024-07-21 16:35:01.074931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.900 [2024-07-21 16:35:01.074947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.900 [2024-07-21 16:35:01.074979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.900 [2024-07-21 16:35:01.074991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.900 [2024-07-21 16:35:01.075006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.900 [2024-07-21 16:35:01.075020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:42.900 [2024-07-21 16:35:01.082744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:42.900 [2024-07-21 16:35:01.082846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.900 [2024-07-21 16:35:01.082868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cb370 with addr=10.0.0.3, port=4420 00:19:42.900 [2024-07-21 16:35:01.082878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cb370 is same with the state(5) to be set 00:19:42.900 [2024-07-21 16:35:01.082895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cb370 (9): Bad file descriptor 00:19:42.900 [2024-07-21 16:35:01.082909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:42.900 [2024-07-21 16:35:01.082919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:42.900 [2024-07-21 16:35:01.082928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:42.900 [2024-07-21 16:35:01.082943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:42.900 [2024-07-21 16:35:01.084867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:42.900 [2024-07-21 16:35:01.084965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:42.900 [2024-07-21 16:35:01.084986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1203480 with addr=10.0.0.2, port=4420 00:19:42.900 [2024-07-21 16:35:01.084997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1203480 is same with the state(5) to be set 00:19:42.900 [2024-07-21 16:35:01.085014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1203480 (9): Bad file descriptor 00:19:42.900 [2024-07-21 16:35:01.085046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:42.900 [2024-07-21 16:35:01.085059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:42.900 [2024-07-21 16:35:01.085067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:42.900 [2024-07-21 16:35:01.085082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
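The retry loop resolves on the next discovery log page: the entries just below report the 4420 paths as "not found" (so they are dropped) while the 4421 paths are "found again". The script then verifies that each controller is left with a single path on 4421 using the same jq pipeline as its earlier checks. A hedged sketch of that host-side query, with the controller names taken from this run:

    # Host-side path check: expect only "4421" once the 4420 listeners are gone.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # -> 4421
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # -> 4421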
00:19:42.900 [2024-07-21 16:35:01.090043] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:42.900 [2024-07-21 16:35:01.090089] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:42.900 [2024-07-21 16:35:01.090109] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:42.900 [2024-07-21 16:35:01.091054] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:19:42.900 [2024-07-21 16:35:01.091098] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:42.900 [2024-07-21 16:35:01.091116] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:43.159 [2024-07-21 16:35:01.176110] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:43.159 [2024-07-21 16:35:01.177106] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:44.092 16:35:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:19:44.092 16:35:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.092 16:35:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:44.092 16:35:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:44.092 16:35:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.092 16:35:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.092 16:35:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:44.092 16:35:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
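After the path checks traced below, the script re-runs its notification check (mdns_discovery.sh@88/@89): it asks the host for bdev notifications newer than the last processed notify_id and counts them with jq. Removing a listener does not delete any namespace bdevs, so the expected count here is 0. A minimal sketch of that pattern, condensed onto fewer lines than the traced helper and reusing the notify_id of 4 reached earlier in this run:

    # Count bdev events newer than the last processed notification id.
    notify_id=4
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    [[ $notification_count == 0 ]]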
00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.092 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.093 16:35:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:19:44.350 [2024-07-21 16:35:02.318866] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@65 -- # sort 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.283 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.283 [2024-07-21 16:35:03.483875] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:19:45.283 2024/07/21 16:35:03 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: 
Code=-17 Msg=File exists 00:19:45.541 request: 00:19:45.541 { 00:19:45.541 "method": "bdev_nvme_start_mdns_discovery", 00:19:45.541 "params": { 00:19:45.541 "name": "mdns", 00:19:45.541 "svcname": "_nvme-disc._http", 00:19:45.541 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:45.541 } 00:19:45.541 } 00:19:45.541 Got JSON-RPC error response 00:19:45.541 GoRPCClient: error on JSON-RPC call 00:19:45.541 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:45.541 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:45.541 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:45.541 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:45.541 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:45.541 16:35:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:19:46.106 [2024-07-21 16:35:04.072573] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:46.106 [2024-07-21 16:35:04.172569] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:46.106 [2024-07-21 16:35:04.272576] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:46.106 [2024-07-21 16:35:04.272598] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:46.106 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:46.106 cookie is 0 00:19:46.106 is_local: 1 00:19:46.106 our_own: 0 00:19:46.106 wide_area: 0 00:19:46.106 multicast: 1 00:19:46.106 cached: 1 00:19:46.364 [2024-07-21 16:35:04.372578] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:46.364 [2024-07-21 16:35:04.372602] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:46.364 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:46.364 cookie is 0 00:19:46.364 is_local: 1 00:19:46.364 our_own: 0 00:19:46.364 wide_area: 0 00:19:46.364 multicast: 1 00:19:46.364 cached: 1 00:19:46.364 [2024-07-21 16:35:04.372611] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:46.364 [2024-07-21 16:35:04.472578] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:46.364 [2024-07-21 16:35:04.472600] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:46.364 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:46.364 cookie is 0 00:19:46.364 is_local: 1 00:19:46.364 our_own: 0 00:19:46.364 wide_area: 0 00:19:46.364 multicast: 1 00:19:46.364 cached: 1 00:19:46.621 [2024-07-21 16:35:04.572578] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:46.621 [2024-07-21 16:35:04.572599] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:46.621 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:46.621 cookie is 0 00:19:46.621 is_local: 1 00:19:46.621 our_own: 0 00:19:46.621 wide_area: 0 00:19:46.621 multicast: 1 00:19:46.621 cached: 1 00:19:46.621 [2024-07-21 16:35:04.572609] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:47.187 [2024-07-21 16:35:05.282890] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:47.187 [2024-07-21 16:35:05.282913] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:47.187 [2024-07-21 16:35:05.282930] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:47.187 [2024-07-21 16:35:05.368987] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:19:47.444 [2024-07-21 16:35:05.428623] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:47.444 [2024-07-21 16:35:05.428652] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:47.444 [2024-07-21 16:35:05.482891] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:47.444 [2024-07-21 16:35:05.482913] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:47.444 [2024-07-21 16:35:05.482930] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:47.444 [2024-07-21 16:35:05.568995] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:19:47.444 [2024-07-21 16:35:05.628611] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:47.444 [2024-07-21 16:35:05.628639] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.747 
16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.747 [2024-07-21 16:35:08.686858] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:19:50.747 2024/07/21 16:35:08 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:19:50.747 request: 00:19:50.747 { 00:19:50.747 "method": "bdev_nvme_start_mdns_discovery", 00:19:50.747 "params": { 00:19:50.747 "name": "cdc", 00:19:50.747 "svcname": "_nvme-disc._tcp", 00:19:50.747 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:50.747 } 00:19:50.747 } 00:19:50.747 Got JSON-RPC error response 00:19:50.747 GoRPCClient: error on JSON-RPC call 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94342 00:19:50.747 16:35:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94342 00:19:50.747 [2024-07-21 16:35:08.875802] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94371 00:19:51.005 Got SIGTERM, quitting. 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:51.005 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:19:51.005 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:51.005 avahi-daemon 0.8 exiting. 
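Note: the sequence above exercises SPDK's mDNS-based discovery RPCs and shows that a second bdev_nvme_start_mdns_discovery call with an already-registered name or service is rejected with JSON-RPC error Code=-17 (File exists). A minimal sketch of the same flow against a standalone target, assuming scripts/rpc.py from the SPDK repo and the default RPC socket rather than the test's /tmp/host.sock:

  # start mDNS discovery of NVMe-oF subsystems advertised as _nvme-disc._tcp
  scripts/rpc.py bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # a second start that reuses the name (or an already-watched service) fails with -17 "File exists"
  scripts/rpc.py bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test \
      || echo "duplicate mDNS discovery rejected, as in the log above"
  # inspect what the discovery poller found, then tear it down
  scripts/rpc.py bdev_nvme_get_mdns_discovery_info
  scripts/rpc.py bdev_nvme_stop_mdns_discovery -b mdns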
00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:51.005 rmmod nvme_tcp 00:19:51.005 rmmod nvme_fabrics 00:19:51.005 rmmod nvme_keyring 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94291 ']' 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94291 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94291 ']' 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94291 00:19:51.005 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:19:51.006 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.006 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94291 00:19:51.263 killing process with pid 94291 00:19:51.263 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:51.263 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:51.263 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94291' 00:19:51.263 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94291 00:19:51.263 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94291 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:51.522 00:19:51.522 real 0m20.727s 00:19:51.522 user 0m40.598s 00:19:51.522 sys 0m2.068s 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.522 16:35:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.522 ************************************ 00:19:51.522 END TEST nvmf_mdns_discovery 00:19:51.522 ************************************ 00:19:51.522 16:35:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
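Note: the get_mdns_discovery_svcs, get_discovery_ctrlrs and get_bdev_list helpers traced above all follow the same pattern: query the host-side RPC socket, extract the .name fields with jq, and normalize with sort | xargs before comparing against the expected controller and namespace lists. A condensed sketch of that check (socket path and expected names are the ones from this log; adjust for your own setup):

  sock=/tmp/host.sock
  ctrlrs=$(scripts/rpc.py -s "$sock" bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs)
  bdevs=$(scripts/rpc.py -s "$sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
  [[ "$ctrlrs" == "mdns0_nvme mdns1_nvme" ]] || echo "unexpected discovery ctrlrs: $ctrlrs"
  [[ "$bdevs" == "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2" ]] || echo "unexpected bdevs: $bdevs"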
00:19:51.522 16:35:09 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:19:51.522 16:35:09 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:51.522 16:35:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:51.522 16:35:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.522 16:35:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:51.522 ************************************ 00:19:51.522 START TEST nvmf_host_multipath 00:19:51.522 ************************************ 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:51.522 * Looking for test storage... 00:19:51.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.522 16:35:09 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:51.523 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:51.781 Cannot 
find device "nvmf_tgt_br" 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.781 Cannot find device "nvmf_tgt_br2" 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:51.781 Cannot find device "nvmf_tgt_br" 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:51.781 Cannot find device "nvmf_tgt_br2" 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:51.781 16:35:09 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:51.781 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.039 16:35:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:52.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:52.039 00:19:52.039 --- 10.0.0.2 ping statistics --- 00:19:52.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.039 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:52.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:52.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:19:52.039 00:19:52.039 --- 10.0.0.3 ping statistics --- 00:19:52.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.039 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:52.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:19:52.039 00:19:52.039 --- 10.0.0.1 ping statistics --- 00:19:52.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.039 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:52.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94934 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94934 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94934 ']' 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:52.039 16:35:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:52.039 [2024-07-21 16:35:10.123477] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:19:52.039 [2024-07-21 16:35:10.123566] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.298 [2024-07-21 16:35:10.263567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:52.298 [2024-07-21 16:35:10.355003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
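Note: nvmf_veth_init above builds the virtual topology the multipath test runs on: a nvmf_tgt_ns_spdk namespace holding two target interfaces (10.0.0.2 on nvmf_tgt_if and 10.0.0.3 on nvmf_tgt_if2), an initiator interface with 10.0.0.1 on the host, all joined through the nvmf_br bridge, an iptables accept rule for port 4420, and ping checks in both directions. A minimal sketch of the single-path variant, run as root; the second target interface is added the same way:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host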
00:19:52.298 [2024-07-21 16:35:10.355370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.298 [2024-07-21 16:35:10.355529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.298 [2024-07-21 16:35:10.355584] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.298 [2024-07-21 16:35:10.355708] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.298 [2024-07-21 16:35:10.355913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.298 [2024-07-21 16:35:10.355928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.232 16:35:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.232 16:35:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:19:53.232 16:35:11 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.232 16:35:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:53.232 16:35:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:53.232 16:35:11 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.232 16:35:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94934 00:19:53.232 16:35:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:53.490 [2024-07-21 16:35:11.515901] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.490 16:35:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:53.749 Malloc0 00:19:53.749 16:35:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:54.008 16:35:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:54.266 16:35:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:54.524 [2024-07-21 16:35:12.583894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.524 16:35:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:54.783 [2024-07-21 16:35:12.795979] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95033 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # 
waitforlisten 95033 /var/tmp/bdevperf.sock 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 95033 ']' 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.783 16:35:12 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:55.716 16:35:13 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.716 16:35:13 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:19:55.716 16:35:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:55.973 16:35:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:56.231 Nvme0n1 00:19:56.488 16:35:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:56.745 Nvme0n1 00:19:56.745 16:35:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:56.745 16:35:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:57.677 16:35:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:57.677 16:35:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:57.935 16:35:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:58.194 16:35:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:58.194 16:35:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95127 00:19:58.194 16:35:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:58.194 16:35:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:04.750 16:35:22 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:04.750 Attaching 4 probes... 00:20:04.750 @path[10.0.0.2, 4421]: 18018 00:20:04.750 @path[10.0.0.2, 4421]: 18330 00:20:04.750 @path[10.0.0.2, 4421]: 18771 00:20:04.750 @path[10.0.0.2, 4421]: 18297 00:20:04.750 @path[10.0.0.2, 4421]: 18414 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95127 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:04.750 16:35:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:05.007 16:35:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:05.007 16:35:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95253 00:20:05.007 16:35:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:05.007 16:35:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:11.566 Attaching 4 probes... 
00:20:11.566 @path[10.0.0.2, 4420]: 18546 00:20:11.566 @path[10.0.0.2, 4420]: 19054 00:20:11.566 @path[10.0.0.2, 4420]: 18988 00:20:11.566 @path[10.0.0.2, 4420]: 18546 00:20:11.566 @path[10.0.0.2, 4420]: 18418 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95253 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:11.566 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:11.825 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:11.825 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95388 00:20:11.825 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:11.825 16:35:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:18.382 16:35:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:18.382 16:35:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:18.382 Attaching 4 probes... 
00:20:18.382 @path[10.0.0.2, 4421]: 14635 00:20:18.382 @path[10.0.0.2, 4421]: 20244 00:20:18.382 @path[10.0.0.2, 4421]: 20162 00:20:18.382 @path[10.0.0.2, 4421]: 20358 00:20:18.382 @path[10.0.0.2, 4421]: 20237 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95388 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:18.382 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:18.640 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:18.640 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95520 00:20:18.640 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:18.640 16:35:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:25.231 Attaching 4 probes... 
00:20:25.231 00:20:25.231 00:20:25.231 00:20:25.231 00:20:25.231 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95520 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:25.231 16:35:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:25.231 16:35:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:25.231 16:35:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:25.231 16:35:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95649 00:20:25.231 16:35:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:25.231 16:35:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:31.796 Attaching 4 probes... 
00:20:31.796 @path[10.0.0.2, 4421]: 20783 00:20:31.796 @path[10.0.0.2, 4421]: 21064 00:20:31.796 @path[10.0.0.2, 4421]: 20945 00:20:31.796 @path[10.0.0.2, 4421]: 20950 00:20:31.796 @path[10.0.0.2, 4421]: 20814 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95649 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:31.796 16:35:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:32.054 [2024-07-21 16:35:50.051377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051502] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 
[2024-07-21 16:35:50.051531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 [2024-07-21 16:35:50.051630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad460 is same with the state(5) to be set 00:20:32.054 16:35:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:32.988 16:35:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:32.988 16:35:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95785 00:20:32.988 16:35:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:32.988 16:35:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:39.544 Attaching 4 probes... 
00:20:39.544 @path[10.0.0.2, 4420]: 18986 00:20:39.544 @path[10.0.0.2, 4420]: 19498 00:20:39.544 @path[10.0.0.2, 4420]: 19408 00:20:39.544 @path[10.0.0.2, 4420]: 19415 00:20:39.544 @path[10.0.0.2, 4420]: 19442 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95785 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:39.544 [2024-07-21 16:35:57.613066] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:39.544 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:39.803 16:35:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:46.412 16:36:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:46.412 16:36:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95978 00:20:46.412 16:36:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:46.412 16:36:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94934 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:51.677 16:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:51.677 16:36:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:51.934 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:51.934 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:51.934 Attaching 4 probes... 
00:20:51.934 @path[10.0.0.2, 4421]: 19602 00:20:51.934 @path[10.0.0.2, 4421]: 19872 00:20:51.935 @path[10.0.0.2, 4421]: 19778 00:20:51.935 @path[10.0.0.2, 4421]: 19994 00:20:51.935 @path[10.0.0.2, 4421]: 20438 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95978 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95033 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 95033 ']' 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 95033 00:20:51.935 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:20:52.192 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.192 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95033 00:20:52.192 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:52.192 killing process with pid 95033 00:20:52.192 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:52.192 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95033' 00:20:52.192 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 95033 00:20:52.192 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 95033 00:20:52.192 Connection closed with partial response: 00:20:52.192 00:20:52.192 00:20:52.459 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95033 00:20:52.459 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:52.459 [2024-07-21 16:35:12.856061] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:20:52.459 [2024-07-21 16:35:12.856243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95033 ] 00:20:52.459 [2024-07-21 16:35:12.990402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.459 [2024-07-21 16:35:13.109244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.459 Running I/O for 90 seconds... 
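The xtrace above repeats one pattern per path switch: set_ANA_state flips the ANA state of the two listeners over rpc.py (multipath.sh@58-@59), and confirm_io_on_port attaches the nvmf_path.bt bpftrace counters to the bdevperf process, waits, then checks that the port reported as holding the requested ANA state by nvmf_subsystem_get_listeners is also the first port showing I/O in trace.txt (multipath.sh@64-@73). Below is a minimal self-contained sketch of that check, reconstructed from the commands echoed in this run rather than the verbatim test script; the NQN, addresses, bdevperf pid 94934, and repo paths are the ones from this log, and backgrounding bpftrace.sh with a redirect here stands in for however the real helper records dtrace_pid and populates trace.txt.

    #!/usr/bin/env bash
    # Sketch of the ANA switch + I/O confirmation cycle seen in the trace above (not the verbatim multipath.sh).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    nqn=nqn.2016-06.io.spdk:cnode1

    set_ANA_state() {    # multipath.sh@58-@59: first arg -> listener 4420, second arg -> listener 4421
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    confirm_io_on_port() {    # multipath.sh@64-@73
        local ana_state=$1 expected_port=$2
        # count I/O per path with the nvmf_path.bt probes (bdevperf ran as pid 94934 in this log)
        /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94934 \
            /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > "$trace" &
        local dtrace_pid=$!
        sleep 6
        # port whose listener currently reports the requested ANA state
        local active_port
        active_port=$("$rpc" nvmf_subsystem_get_listeners "$nqn" |
            jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")
        # first "@path[10.0.0.2, <port>]: <count>" entry in the bpftrace output
        local port
        port=$(cut -d ']' -f1 "$trace" | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
        kill "$dtrace_pid"
        rm -f "$trace"
        [[ $port == "$expected_port" && $port == "$active_port" ]]
    }

    # e.g. the cycle logged at 16:35:42-16:35:49 above:
    set_ANA_state non_optimized optimized
    confirm_io_on_port optimized 4421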
00:20:52.459 [2024-07-21 16:35:23.102046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.459 [2024-07-21 16:35:23.102113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:52.459 [2024-07-21 16:35:23.102185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.459 [2024-07-21 16:35:23.102206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:52.459 [2024-07-21 16:35:23.102225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.460 [2024-07-21 16:35:23.102240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.102259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.460 [2024-07-21 16:35:23.102290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.102345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.460 [2024-07-21 16:35:23.102362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.102412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.460 [2024-07-21 16:35:23.102430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.102452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.460 [2024-07-21 16:35:23.102467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.102488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.460 [2024-07-21 16:35:23.102503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.102832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.460 [2024-07-21 16:35:23.102855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.102878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.102893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.102911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.102950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.102971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.102984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:52.460 [2024-07-21 16:35:23.103724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.103970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.103983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.104002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.104015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.104033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 
nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.104046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.104064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.104077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.104095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.104108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.104127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.460 [2024-07-21 16:35:23.104140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:52.460 [2024-07-21 16:35:23.104158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 
dnr:0 00:20:52.461 [2024-07-21 16:35:23.104840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.104969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.104983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:52.461 [2024-07-21 16:35:23.105671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.461 [2024-07-21 16:35:23.105685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.462 [2024-07-21 16:35:23.106489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:52.462 [2024-07-21 16:35:23.106870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.106977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.106996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 
lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:20:52.462 [2024-07-21 16:35:23.107933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.462 [2024-07-21 16:35:23.107978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:52.462 [2024-07-21 16:35:23.107997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:23.108017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:23.108037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:23.108051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:23.108070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:23.108083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:23.108102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:23.108115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:23.108133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:23.108147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:23.108166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:23.108179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:23.108204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:23.108218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:23.108237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:23.108251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.666634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.666693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.666779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.666800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.666836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.666851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.666869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.666882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.666900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.666913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.666953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.666967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.666986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.666998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.463 [2024-07-21 16:35:29.667059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.463 [2024-07-21 16:35:29.667090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.463 [2024-07-21 16:35:29.667120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.463 [2024-07-21 16:35:29.667151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.463 [2024-07-21 16:35:29.667181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.463 [2024-07-21 16:35:29.667211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.463 [2024-07-21 16:35:29.667257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:52.463 [2024-07-21 16:35:29.667476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.463 [2024-07-21 16:35:29.667739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.463 [2024-07-21 16:35:29.667752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.667771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.667784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.667803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.667823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.667843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.667857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:20:52.464 [2024-07-21 16:35:29.668959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.668974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.668995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.464 [2024-07-21 16:35:29.669397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:52.464 [2024-07-21 16:35:29.669419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.669433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.669467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.669502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.669982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.669996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.670017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:52.465 [2024-07-21 16:35:29.670037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.670607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.670631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.670657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.670673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.670711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.670741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.670764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.670777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.670801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.670830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.670852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.670866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.670888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.670909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.670932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.670946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.465 [2024-07-21 16:35:29.671234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 
nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.465 [2024-07-21 16:35:29.671917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:52.465 [2024-07-21 16:35:29.671949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:29.671964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:29.671990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:29.672004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:29.672030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:29.672043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:29.672070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:29.672083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:29.672109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:29.672123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:20:52.466 [2024-07-21 16:35:29.672148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:29.672162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:29.672189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:29.672203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:29.672228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:29.672242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:29.672284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:29.672299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.648211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.648318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.648381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.648404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.648427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.648442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.648490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.648507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.648528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.648543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.648564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.648581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.648618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.648632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.648682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.648710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.649975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.649989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.650023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.650056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.466 [2024-07-21 16:35:36.650089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:52.466 [2024-07-21 16:35:36.650122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.466 [2024-07-21 16:35:36.650161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.466 [2024-07-21 16:35:36.650201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.466 [2024-07-21 16:35:36.650235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.466 [2024-07-21 16:35:36.650285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.466 [2024-07-21 16:35:36.650353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.466 [2024-07-21 16:35:36.650406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.466 [2024-07-21 16:35:36.650446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:52.466 [2024-07-21 16:35:36.650468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.650975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.650996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:20:52.467 [2024-07-21 16:35:36.651434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.467 [2024-07-21 16:35:36.651808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.651843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.651877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.651911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.651944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.651977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.651997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.652010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.652031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.652044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.652227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.652250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.652310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.652326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:52.467 [2024-07-21 16:35:36.652368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.467 [2024-07-21 16:35:36.652386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:52.468 [2024-07-21 16:35:36.652851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.652963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.652979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.468 [2024-07-21 16:35:36.653868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.468 [2024-07-21 16:35:36.653914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.468 [2024-07-21 16:35:36.653955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.653979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.468 [2024-07-21 16:35:36.653993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.654018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.468 [2024-07-21 16:35:36.654032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.654057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.468 [2024-07-21 16:35:36.654072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.654097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.468 [2024-07-21 16:35:36.654110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:52.468 [2024-07-21 16:35:36.654135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.468 [2024-07-21 16:35:36.654149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:36.654174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:36.654188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:36.654213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:36.654234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:20:52.469 [2024-07-21 16:35:36.654260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:36.654291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:36.654334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:36.654352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:36.654406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:36.654424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:36.654453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:36.654468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:36.654496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:36.654511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:36.654539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:36.654554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:36.654582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:36.654597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.051970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.052979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.052992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.053003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.053028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.053052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.053076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.053100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.469 [2024-07-21 16:35:50.053124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:50.053148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:50.053173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:50.053202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:50.053228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:50.053252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.469 [2024-07-21 16:35:50.053265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.469 [2024-07-21 16:35:50.053294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 
16:35:50.053321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.470 [2024-07-21 16:35:50.053337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.053788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.470 [2024-07-21 16:35:50.053813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.470 [2024-07-21 16:35:50.053837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.470 [2024-07-21 16:35:50.053861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.470 [2024-07-21 16:35:50.053886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.470 [2024-07-21 16:35:50.053916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.470 [2024-07-21 16:35:50.053943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.470 [2024-07-21 16:35:50.053969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.053982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.470 [2024-07-21 16:35:50.053994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25856 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 
16:35:50.054426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.470 [2024-07-21 16:35:50.054451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.470 [2024-07-21 16:35:50.054465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.471 [2024-07-21 16:35:50.054485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.471 [2024-07-21 16:35:50.054511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.471 [2024-07-21 16:35:50.054536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.471 [2024-07-21 16:35:50.054561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:52.471 [2024-07-21 16:35:50.054586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.054982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.054996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:52.471 [2024-07-21 16:35:50.055552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.471 [2024-07-21 16:35:50.055640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.471 [2024-07-21 16:35:50.055697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.055722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.055747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.055772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.055797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.055822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.055859] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.055884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.055909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.055933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.055944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.056175] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14e9240 was disconnected and freed. reset controller. 00:20:52.472 [2024-07-21 16:35:50.057621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:52.472 [2024-07-21 16:35:50.057723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.472 [2024-07-21 16:35:50.057744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.472 [2024-07-21 16:35:50.057773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cad00 (9): Bad file descriptor 00:20:52.472 [2024-07-21 16:35:50.058084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:52.472 [2024-07-21 16:35:50.058113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cad00 with addr=10.0.0.2, port=4421 00:20:52.472 [2024-07-21 16:35:50.058128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cad00 is same with the state(5) to be set 00:20:52.472 [2024-07-21 16:35:50.058150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cad00 (9): Bad file descriptor 00:20:52.472 [2024-07-21 16:35:50.058171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:52.472 [2024-07-21 16:35:50.058183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:52.472 [2024-07-21 16:35:50.058195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:52.472 [2024-07-21 16:35:50.058217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
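Taken together, the storm of aborted READs, the freed qpair 0x14e9240, and the connect() failure with errno 111 (ECONNREFUSED) against 10.0.0.2:4421 are consistent with the multipath test having torn down the listener for that path while I/O was in flight; bdev_nvme then keeps retrying the controller reset until the path comes back. A minimal sketch of toggling a path this way, using the listener RPCs that appear elsewhere in this log (subsystem NQN, address and port taken from the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop one path: outstanding I/O on its queue pairs is aborted (SQ DELETION) and the
  # host's reconnect attempts fail with ECONNREFUSED until the listener returns.
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 10
  # Restore the path; the pending controller reset then completes successfully.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421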
00:20:52.472 [2024-07-21 16:35:50.058230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:52.472 [2024-07-21 16:36:00.123350] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:52.472 Received shutdown signal, test time was about 55.298965 seconds
00:20:52.472
00:20:52.472 Latency(us)
00:20:52.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:52.472 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:52.472 Verification LBA range: start 0x0 length 0x4000
00:20:52.472 Nvme0n1 : 55.30 8391.70 32.78 0.00 0.00 15224.55 670.25 7015926.69
00:20:52.472 ===================================================================================================================
00:20:52.472 Total : 8391.70 32.78 0.00 0.00 15224.55 670.25 7015926.69
00:20:52.472 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:52.730 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:20:52.730 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:52.730 16:36:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:20:52.730 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:52.730 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:52.731 rmmod nvme_tcp
00:20:52.731 rmmod nvme_fabrics
00:20:52.731 rmmod nvme_keyring
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94934 ']'
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94934
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94934 ']'
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94934
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94934
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:20:52.731 killing process with pid 94934
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94934'
00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- #
kill 94934 00:20:52.731 16:36:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94934 00:20:52.989 16:36:11 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:52.989 16:36:11 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:52.989 16:36:11 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:52.989 16:36:11 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:52.989 16:36:11 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:53.247 16:36:11 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.247 16:36:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.247 16:36:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.247 16:36:11 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:53.247 00:20:53.247 real 1m1.643s 00:20:53.247 user 2m53.905s 00:20:53.247 sys 0m13.994s 00:20:53.247 16:36:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:53.247 16:36:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:53.247 ************************************ 00:20:53.247 END TEST nvmf_host_multipath 00:20:53.247 ************************************ 00:20:53.247 16:36:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:53.247 16:36:11 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:53.247 16:36:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:53.247 16:36:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.247 16:36:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:53.247 ************************************ 00:20:53.247 START TEST nvmf_timeout 00:20:53.247 ************************************ 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:53.247 * Looking for test storage... 
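Before timeout.sh takes over, the multipath teardown that just scrolled past reduces to a short sequence; condensed here as a sketch, with the PID, NQN and interface name taken from this run (the namespace removal is an assumption about what _remove_spdk_ns does, since its body is not expanded in the trace):

  # Condensed nvmftestfini for the multipath run above (sketch; values specific to this run)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp            # unloads nvme_tcp, nvme_fabrics, nvme_keyring per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 94934 && wait 94934           # stop the nvmf_tgt process (pid from this log)
  ip netns delete nvmf_tgt_ns_spdk   # assumption: the effect of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if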
00:20:53.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.247 
16:36:11 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.247 16:36:11 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.247 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:53.248 Cannot find device "nvmf_tgt_br" 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:53.248 Cannot find device "nvmf_tgt_br2" 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:53.248 Cannot find device "nvmf_tgt_br" 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:20:53.248 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:53.505 Cannot find device "nvmf_tgt_br2" 00:20:53.505 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:20:53.505 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:53.505 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:53.505 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:53.505 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.505 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:53.505 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:53.506 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:53.506 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:53.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:20:53.764 00:20:53.764 --- 10.0.0.2 ping statistics --- 00:20:53.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.764 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:53.764 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:53.764 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:53.764 00:20:53.764 --- 10.0.0.3 ping statistics --- 00:20:53.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.764 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:53.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:53.764 00:20:53.764 --- 10.0.0.1 ping statistics --- 00:20:53.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.764 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=96303 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 96303 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96303 ']' 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.764 16:36:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:53.764 [2024-07-21 16:36:11.809069] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
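The pings above confirm the virtual topology that nvmf_veth_init rebuilt for the nvmf_timeout run: the initiator stays in the root namespace on nvmf_init_if (10.0.0.1/24), the target runs inside the nvmf_tgt_ns_spdk namespace on nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), and the peer veth ends are bridged on nvmf_br with a firewall exception for the NVMe/TCP port. A condensed sketch of the commands from the trace above (one target interface shown; the second is set up identically), ending with the target launch inside the namespace:

  # Abridged nvmf_veth_init, as traced above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # root namespace -> target, as verified above
  # Launch the target inside the namespace (command from the trace)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3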
00:20:53.764 [2024-07-21 16:36:11.809143] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.764 [2024-07-21 16:36:11.942104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:54.022 [2024-07-21 16:36:12.061938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.022 [2024-07-21 16:36:12.061998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.022 [2024-07-21 16:36:12.062020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.022 [2024-07-21 16:36:12.062037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.022 [2024-07-21 16:36:12.062051] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.022 [2024-07-21 16:36:12.062231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.022 [2024-07-21 16:36:12.062257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.956 16:36:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.956 16:36:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:54.956 16:36:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.956 16:36:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.956 16:36:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:54.956 16:36:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.956 16:36:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.956 16:36:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:54.956 [2024-07-21 16:36:13.116502] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.956 16:36:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:55.213 Malloc0 00:20:55.213 16:36:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:55.471 16:36:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:55.729 16:36:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:55.987 [2024-07-21 16:36:13.991419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.987 16:36:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96394 00:20:55.987 16:36:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:55.987 16:36:14 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96394 /var/tmp/bdevperf.sock 00:20:55.987 16:36:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96394 ']' 00:20:55.987 16:36:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.987 16:36:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.987 16:36:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.987 16:36:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.987 16:36:14 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:55.987 [2024-07-21 16:36:14.067789] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:20:55.987 [2024-07-21 16:36:14.067895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96394 ] 00:20:56.245 [2024-07-21 16:36:14.207235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.245 [2024-07-21 16:36:14.332176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.821 16:36:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.821 16:36:15 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:20:56.821 16:36:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:57.092 16:36:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:57.350 NVMe0n1 00:20:57.351 16:36:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96442 00:20:57.351 16:36:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:57.351 16:36:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:57.610 Running I/O for 10 seconds... 
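With bdevperf now running I/O, the bring-up that host/timeout.sh just performed can be read off the trace: configure the target over the default RPC socket (TCP transport, a 64 MB Malloc-backed namespace, listener on 10.0.0.2:4420), start bdevperf in wait-for-RPC mode on its own socket, attach the controller with a 5-second ctrlr-loss timeout and 2-second reconnect delay, then kick off the workload. Condensed as a sketch (commands and flags as they appear above; backgrounding added for readability):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (RPC socket /var/tmp/spdk.sock)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf waits for RPC (-z), then the controller is attached and the test started
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &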
00:20:58.545 16:36:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.805 [2024-07-21 16:36:16.805330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacb870 is same with the state(5) to be set 00:20:58.805 [2024-07-21 16:36:16.805410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacb870 is same with the state(5) to be set 00:20:58.805 [2024-07-21 16:36:16.805420] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacb870 is same with the state(5) to be set 00:20:58.805 [2024-07-21 16:36:16.805428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacb870 is same with the state(5) to be set 00:20:58.805 [2024-07-21 16:36:16.805436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacb870 is same with the state(5) to be set 00:20:58.805 [2024-07-21 16:36:16.805444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacb870 is same with the state(5) to be set 00:20:58.805 [2024-07-21 16:36:16.805452] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacb870 is same with the state(5) to be set 00:20:58.805 [2024-07-21 16:36:16.805842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.805 [2024-07-21 16:36:16.805882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.805909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.805 [2024-07-21 16:36:16.805920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.805932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.805941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.805952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.805961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.805972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.805981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.805992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:115 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95624 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:58.805 [2024-07-21 16:36:16.806452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.805 [2024-07-21 16:36:16.806549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.805 [2024-07-21 16:36:16.806559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806647] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.806983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.806991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.806 [2024-07-21 16:36:16.807232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 [2024-07-21 16:36:16.807466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.806 
[2024-07-21 16:36:16.807485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.806 [2024-07-21 16:36:16.807494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.807980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.807991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96424 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.807 [2024-07-21 16:36:16.808223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.807 [2024-07-21 16:36:16.808244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.807 [2024-07-21 16:36:16.808274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.807 [2024-07-21 16:36:16.808296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.807 
[2024-07-21 16:36:16.808316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.807 [2024-07-21 16:36:16.808335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.807 [2024-07-21 16:36:16.808355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.807 [2024-07-21 16:36:16.808420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.807 [2024-07-21 16:36:16.808431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.808 [2024-07-21 16:36:16.808439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.808 [2024-07-21 16:36:16.808450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.808 [2024-07-21 16:36:16.808459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.808 [2024-07-21 16:36:16.808469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.808 [2024-07-21 16:36:16.808479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.808 [2024-07-21 16:36:16.808490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:58.808 [2024-07-21 16:36:16.808499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.808 [2024-07-21 16:36:16.808513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13308d0 is same with the state(5) to be set 00:20:58.808 [2024-07-21 16:36:16.808526] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:58.808 [2024-07-21 16:36:16.808533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:58.808 [2024-07-21 16:36:16.808541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96528 len:8 PRP1 0x0 PRP2 0x0 00:20:58.808 [2024-07-21 16:36:16.808550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.808 [2024-07-21 16:36:16.808612] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13308d0 was disconnected and freed. reset controller. 00:20:58.808 [2024-07-21 16:36:16.808850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:58.808 [2024-07-21 16:36:16.808944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c3240 (9): Bad file descriptor 00:20:58.808 [2024-07-21 16:36:16.809071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:58.808 [2024-07-21 16:36:16.809104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c3240 with addr=10.0.0.2, port=4420 00:20:58.808 [2024-07-21 16:36:16.809117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3240 is same with the state(5) to be set 00:20:58.808 [2024-07-21 16:36:16.809139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c3240 (9): Bad file descriptor 00:20:58.808 [2024-07-21 16:36:16.809156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:58.808 [2024-07-21 16:36:16.809166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:58.808 [2024-07-21 16:36:16.809177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:58.808 [2024-07-21 16:36:16.809197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:58.808 [2024-07-21 16:36:16.809209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:58.808 16:36:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:00.707 [2024-07-21 16:36:18.809340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:00.707 [2024-07-21 16:36:18.809426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c3240 with addr=10.0.0.2, port=4420 00:21:00.707 [2024-07-21 16:36:18.809442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3240 is same with the state(5) to be set 00:21:00.707 [2024-07-21 16:36:18.809472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c3240 (9): Bad file descriptor 00:21:00.707 [2024-07-21 16:36:18.809503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:00.707 [2024-07-21 16:36:18.809515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:00.707 [2024-07-21 16:36:18.809538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:00.707 [2024-07-21 16:36:18.809562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:00.707 [2024-07-21 16:36:18.809574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:00.707 16:36:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:00.707 16:36:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:00.707 16:36:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:00.966 16:36:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:00.966 16:36:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:00.966 16:36:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:00.966 16:36:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:01.224 16:36:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:01.224 16:36:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:03.123 [2024-07-21 16:36:20.809651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:03.123 [2024-07-21 16:36:20.809717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c3240 with addr=10.0.0.2, port=4420 00:21:03.123 [2024-07-21 16:36:20.809740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c3240 is same with the state(5) to be set 00:21:03.123 [2024-07-21 16:36:20.809759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c3240 (9): Bad file descriptor 00:21:03.123 [2024-07-21 16:36:20.809779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.123 [2024-07-21 16:36:20.809789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:03.123 [2024-07-21 16:36:20.809799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:03.123 [2024-07-21 16:36:20.809818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:03.123 [2024-07-21 16:36:20.809829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:05.023 [2024-07-21 16:36:22.809851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:05.023 [2024-07-21 16:36:22.809904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:05.023 [2024-07-21 16:36:22.809927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:05.023 [2024-07-21 16:36:22.809936] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:05.023 [2024-07-21 16:36:22.809957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
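The host/timeout.sh trace above polls bdevperf over its RPC socket to confirm that controller NVMe0 and bdev NVMe0n1 are still registered while the target stays unreachable and reconnect attempts keep failing; the same checks further down return empty strings once the controller has been given up on. A minimal sketch of that polling pattern, reconstructed from the rpc.py and jq invocations in the trace (the actual helpers in host/timeout.sh may differ in detail, and the rpc/sock variables are introduced here only for readability):

    #!/usr/bin/env bash
    # Sketch only: approximates the get_controller/get_bdev checks traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    get_controller() {
        # Prints "NVMe0" while bdevperf still holds the controller object,
        # and nothing once the controller has been deleted.
        "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # Same check for the bdev exposed on top of the controller.
        "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
    }

    [[ $(get_controller) == "NVMe0" ]]    # still present while reconnects are retried
    [[ $(get_bdev) == "NVMe0n1" ]]

The later [[ '' == '' ]] comparisons in the log are these same checks run after the controller-loss timeout has fired, when both RPCs return an empty list.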
00:21:05.957 00:21:05.957 Latency(us) 00:21:05.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.957 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.957 Verification LBA range: start 0x0 length 0x4000 00:21:05.957 NVMe0n1 : 8.15 1465.10 5.72 15.71 0.00 86312.17 1921.40 7015926.69 00:21:05.957 =================================================================================================================== 00:21:05.957 Total : 1465.10 5.72 15.71 0.00 86312.17 1921.40 7015926.69 00:21:05.957 0 00:21:06.215 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:21:06.215 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:06.215 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:06.472 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:21:06.472 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:21:06.472 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:06.472 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96442 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96394 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96394 ']' 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96394 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96394 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:06.730 killing process with pid 96394 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96394' 00:21:06.730 Received shutdown signal, test time was about 9.269577 seconds 00:21:06.730 00:21:06.730 Latency(us) 00:21:06.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.730 =================================================================================================================== 00:21:06.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96394 00:21:06.730 16:36:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96394 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.297 [2024-07-21 16:36:25.412484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96599 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96599 /var/tmp/bdevperf.sock 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96599 ']' 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.297 16:36:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:07.297 [2024-07-21 16:36:25.471523] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:21:07.297 [2024-07-21 16:36:25.471613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96599 ] 00:21:07.555 [2024-07-21 16:36:25.596995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.555 [2024-07-21 16:36:25.698211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.490 16:36:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.490 16:36:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:08.490 16:36:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:08.490 16:36:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:21:08.749 NVMe0n1 00:21:08.749 16:36:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96651 00:21:08.749 16:36:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:08.749 16:36:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:21:09.007 Running I/O for 10 seconds... 
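The startup traced above brings up a fresh bdevperf instance and attaches the controller with explicit reconnect and loss timeouts; those three flags are what this part of the timeout test exercises. A sketch of the same two RPC calls with the meaning of each knob spelled out in comments (the flag values are the ones shown in the trace; reading -r -1 as an unlimited retry count, and the rpc/sock variables, are my own additions):

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Retry count set to -1 so failed I/O keeps being retried and the
    # per-controller timeouts below decide when to give up.
    "$rpc" -s "$sock" bdev_nvme_set_options -r -1

    # --reconnect-delay-sec 1      : wait one second between reconnect attempts
    # --fast-io-fail-timeout-sec 2 : after 2s without a connection, start failing I/O back to the caller
    # --ctrlr-loss-timeout-sec 5   : after 5s without a connection, stop reconnecting and delete NVMe0
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

The three values are deliberately staggered (reconnect delay shorter than the fast-io-fail timeout, which is shorter than the controller-loss timeout), so queued I/O starts failing back quickly while reconnect attempts continue, and the controller itself is only torn down after the full five seconds.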
00:21:09.961 16:36:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.961 [2024-07-21 16:36:28.164216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164443] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.961 [2024-07-21 16:36:28.164482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state *ERROR* notice for tqpair=0xcc9cc0 repeats verbatim for timestamps 16:36:28.164491 through 16:36:28.164831 ...]
00:21:09.962 [2024-07-21 16:36:28.164839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.164941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9cc0 is same with the state(5) to be set 00:21:09.962 [2024-07-21 16:36:28.165409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 
16:36:28.165541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.165981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.165992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.166001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.166011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.166020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.166031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.166039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.166050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.166059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.166070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.962 [2024-07-21 16:36:28.166079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.962 [2024-07-21 16:36:28.166092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:09.963 [2024-07-21 16:36:28.166375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166585] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.963 [2024-07-21 16:36:28.166594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166803] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.963 [2024-07-21 16:36:28.166900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:09.963 [2024-07-21 16:36:28.166909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.224 [2024-07-21 16:36:28.166920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.224 [2024-07-21 16:36:28.166928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.224 [2024-07-21 16:36:28.166938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.224 [2024-07-21 16:36:28.166947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.224 [2024-07-21 16:36:28.166957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.224 [2024-07-21 16:36:28.166966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.224 [2024-07-21 16:36:28.166977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.224 [2024-07-21 16:36:28.166986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.224 [2024-07-21 16:36:28.166996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86776 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.224 [2024-07-21 16:36:28.167005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.224 [2024-07-21 16:36:28.167015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.224 [2024-07-21 16:36:28.167024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.224 [2024-07-21 16:36:28.167042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.224 [2024-07-21 16:36:28.167052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 
[2024-07-21 16:36:28.167222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:10.225 [2024-07-21 16:36:28.167765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.225 [2024-07-21 16:36:28.167788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.225 [2024-07-21 16:36:28.167808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.225 [2024-07-21 16:36:28.167827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.225 [2024-07-21 16:36:28.167847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.225 [2024-07-21 16:36:28.167866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.225 [2024-07-21 16:36:28.167885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.225 [2024-07-21 16:36:28.167904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.225 [2024-07-21 16:36:28.167915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.225 [2024-07-21 16:36:28.167924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.167934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.226 [2024-07-21 16:36:28.167943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.167953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.226 [2024-07-21 16:36:28.167962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.167972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.226 [2024-07-21 16:36:28.167980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.167990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.226 [2024-07-21 16:36:28.167999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.168009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.226 [2024-07-21 16:36:28.168018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.168035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.226 [2024-07-21 16:36:28.168044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 
16:36:28.168055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.226 [2024-07-21 16:36:28.168064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.168075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.226 [2024-07-21 16:36:28.168084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.168094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6858d0 is same with the state(5) to be set 00:21:10.226 [2024-07-21 16:36:28.168111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:10.226 [2024-07-21 16:36:28.168119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:10.226 [2024-07-21 16:36:28.168128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86616 len:8 PRP1 0x0 PRP2 0x0 00:21:10.226 [2024-07-21 16:36:28.168137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.168188] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6858d0 was disconnected and freed. reset controller. 00:21:10.226 [2024-07-21 16:36:28.168293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.226 [2024-07-21 16:36:28.168311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.168322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.226 [2024-07-21 16:36:28.168331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.168342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.226 [2024-07-21 16:36:28.168351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.168360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.226 [2024-07-21 16:36:28.168369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.226 [2024-07-21 16:36:28.168378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618240 is same with the state(5) to be set 00:21:10.226 [2024-07-21 16:36:28.168571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:10.226 [2024-07-21 16:36:28.168605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618240 (9): Bad file descriptor 00:21:10.226 [2024-07-21 16:36:28.168687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:21:10.226 [2024-07-21 16:36:28.168709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x618240 with addr=10.0.0.2, port=4420 00:21:10.226 [2024-07-21 16:36:28.168721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618240 is same with the state(5) to be set 00:21:10.226 [2024-07-21 16:36:28.168741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618240 (9): Bad file descriptor 00:21:10.226 [2024-07-21 16:36:28.168757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:10.226 [2024-07-21 16:36:28.168767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:10.226 [2024-07-21 16:36:28.168777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:10.226 [2024-07-21 16:36:28.168797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:10.226 [2024-07-21 16:36:28.182017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:10.226 16:36:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:21:11.162 [2024-07-21 16:36:29.182122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:11.162 [2024-07-21 16:36:29.182177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x618240 with addr=10.0.0.2, port=4420 00:21:11.162 [2024-07-21 16:36:29.182194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618240 is same with the state(5) to be set 00:21:11.162 [2024-07-21 16:36:29.182213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618240 (9): Bad file descriptor 00:21:11.162 [2024-07-21 16:36:29.182230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:11.162 [2024-07-21 16:36:29.182239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:11.162 [2024-07-21 16:36:29.182248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:11.162 [2024-07-21 16:36:29.182283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:11.162 [2024-07-21 16:36:29.182307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:11.162 16:36:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.421 [2024-07-21 16:36:29.432805] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.421 16:36:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96651 00:21:11.987 [2024-07-21 16:36:30.194164] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:20.099 00:21:20.099 Latency(us) 00:21:20.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.099 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:20.099 Verification LBA range: start 0x0 length 0x4000 00:21:20.099 NVMe0n1 : 10.01 7533.00 29.43 0.00 0.00 16966.17 1824.58 3035150.89 00:21:20.099 =================================================================================================================== 00:21:20.099 Total : 7533.00 29.43 0.00 0.00 16966.17 1824.58 3035150.89 00:21:20.099 0 00:21:20.100 16:36:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96770 00:21:20.100 16:36:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:20.100 16:36:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:21:20.100 Running I/O for 10 seconds... 00:21:20.100 16:36:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:20.100 [2024-07-21 16:36:38.269386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.100 [2024-07-21 16:36:38.269562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xb221c0 is same with the state(5) to be set
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.101 [2024-07-21 16:36:38.270479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.101 [2024-07-21 16:36:38.270486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb221c0 is same with the state(5) to be set 00:21:20.101 [2024-07-21 16:36:38.272228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.101 [2024-07-21 16:36:38.272730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.101 [2024-07-21 16:36:38.272740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.102 [2024-07-21 16:36:38.272760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.102 [2024-07-21 16:36:38.272779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.102 [2024-07-21 16:36:38.272799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.102 [2024-07-21 16:36:38.272819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.102 [2024-07-21 16:36:38.272838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.102 [2024-07-21 16:36:38.272858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 
16:36:38.272870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.272879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.272899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.272919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.272939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.272961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.272980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.272991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:87 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100176 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.102 [2024-07-21 16:36:38.273550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.102 [2024-07-21 16:36:38.273560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:20.103 [2024-07-21 16:36:38.273696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.273988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.273997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.103 [2024-07-21 16:36:38.274194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.103 [2024-07-21 16:36:38.274245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100464 len:8 PRP1 0x0 PRP2 0x0 00:21:20.103 [2024-07-21 16:36:38.274274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.103 [2024-07-21 16:36:38.274300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.103 [2024-07-21 16:36:38.274308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100472 len:8 PRP1 0x0 PRP2 0x0 00:21:20.103 [2024-07-21 16:36:38.274317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.103 [2024-07-21 16:36:38.274334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.103 [2024-07-21 16:36:38.274341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100480 len:8 PRP1 0x0 PRP2 0x0 00:21:20.103 [2024-07-21 16:36:38.274351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274360] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.103 [2024-07-21 16:36:38.274367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.103 [2024-07-21 16:36:38.274375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100488 len:8 PRP1 0x0 PRP2 0x0 00:21:20.103 [2024-07-21 16:36:38.274384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.103 [2024-07-21 16:36:38.274400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.103 [2024-07-21 16:36:38.274408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100496 len:8 PRP1 0x0 PRP2 0x0 00:21:20.103 [2024-07-21 16:36:38.274416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.103 [2024-07-21 16:36:38.274443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.103 [2024-07-21 16:36:38.274451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100504 len:8 PRP1 0x0 PRP2 0x0 00:21:20.103 [2024-07-21 16:36:38.274460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.103 [2024-07-21 16:36:38.274469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.103 [2024-07-21 16:36:38.274476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.103 [2024-07-21 16:36:38.274483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100512 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100520 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100528 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100552 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100560 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100568 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100576 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 
[2024-07-21 16:36:38.274784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100584 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100592 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100600 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100608 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100616 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.274968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100624 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.274977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.274986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.274993] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.275000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100632 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.275009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.275017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.275025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 16:36:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:20.104 [2024-07-21 16:36:38.301497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.301540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.301563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.301577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.301607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100648 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.301622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.301636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.301647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.301660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100656 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.301685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.301698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.301709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.104 [2024-07-21 16:36:38.301721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100664 len:8 PRP1 0x0 PRP2 0x0 00:21:20.104 [2024-07-21 16:36:38.301736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.104 [2024-07-21 16:36:38.301762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.104 [2024-07-21 16:36:38.301784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.105 [2024-07-21 16:36:38.301795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100672 len:8 PRP1 0x0 PRP2 0x0 00:21:20.105 [2024-07-21 16:36:38.301807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.301821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.105 [2024-07-21 
16:36:38.301832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.105 [2024-07-21 16:36:38.301843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100680 len:8 PRP1 0x0 PRP2 0x0 00:21:20.105 [2024-07-21 16:36:38.301856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.301870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.105 [2024-07-21 16:36:38.301883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.105 [2024-07-21 16:36:38.301895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100688 len:8 PRP1 0x0 PRP2 0x0 00:21:20.105 [2024-07-21 16:36:38.301908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.301921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.105 [2024-07-21 16:36:38.301943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.105 [2024-07-21 16:36:38.301955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100696 len:8 PRP1 0x0 PRP2 0x0 00:21:20.105 [2024-07-21 16:36:38.301967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.301981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.105 [2024-07-21 16:36:38.301992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.105 [2024-07-21 16:36:38.302003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100704 len:8 PRP1 0x0 PRP2 0x0 00:21:20.105 [2024-07-21 16:36:38.302016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.302029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.105 [2024-07-21 16:36:38.302039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.105 [2024-07-21 16:36:38.302050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100712 len:8 PRP1 0x0 PRP2 0x0 00:21:20.105 [2024-07-21 16:36:38.302063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.302075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:20.105 [2024-07-21 16:36:38.302086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:20.105 [2024-07-21 16:36:38.302098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100720 len:8 PRP1 0x0 PRP2 0x0 00:21:20.105 [2024-07-21 16:36:38.302111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.302189] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x698c60 was disconnected and freed. reset controller. 
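The block above is the host-side NVMe driver draining its I/O queue pair during the forced disconnect: each queued READ or WRITE is printed by nvme_io_qpair_print_command and then completed with ABORTED - SQ DELETION (00/08) as the submission queue is torn down, ending with qpair 0x698c60 being disconnected and freed. A quick, hedged way to tally those aborts from a saved copy of this console output (the file name console.log is only an example and is not produced by the test):

  # count how many completions above were reported as ABORTED - SQ DELETION
  grep -o 'ABORTED - SQ DELETION' console.log | wc -l
  # split the aborted queued commands by opcode (READ vs WRITE) as printed in the dump
  grep -oE '(READ|WRITE) sqid:1 cid:[0-9]+' console.log | awk '{print $1}' | sort | uniq -c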
00:21:20.105 [2024-07-21 16:36:38.302344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.105 [2024-07-21 16:36:38.302379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.302397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.105 [2024-07-21 16:36:38.302412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.302450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.105 [2024-07-21 16:36:38.302465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.302481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:20.105 [2024-07-21 16:36:38.302504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.105 [2024-07-21 16:36:38.302518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618240 is same with the state(5) to be set 00:21:20.105 [2024-07-21 16:36:38.302832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:20.105 [2024-07-21 16:36:38.302887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618240 (9): Bad file descriptor 00:21:20.105 [2024-07-21 16:36:38.303034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:20.105 [2024-07-21 16:36:38.303065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x618240 with addr=10.0.0.2, port=4420 00:21:20.105 [2024-07-21 16:36:38.303081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618240 is same with the state(5) to be set 00:21:20.105 [2024-07-21 16:36:38.303110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618240 (9): Bad file descriptor 00:21:20.105 [2024-07-21 16:36:38.303134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:20.105 [2024-07-21 16:36:38.303148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:20.105 [2024-07-21 16:36:38.303164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:20.105 [2024-07-21 16:36:38.303193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:20.105 [2024-07-21 16:36:38.303210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:21.478 [2024-07-21 16:36:39.303305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.478 [2024-07-21 16:36:39.303356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x618240 with addr=10.0.0.2, port=4420 00:21:21.478 [2024-07-21 16:36:39.303374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618240 is same with the state(5) to be set 00:21:21.478 [2024-07-21 16:36:39.303394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618240 (9): Bad file descriptor 00:21:21.478 [2024-07-21 16:36:39.303412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:21.478 [2024-07-21 16:36:39.303422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:21.478 [2024-07-21 16:36:39.303431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:21.478 [2024-07-21 16:36:39.303451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:21.478 [2024-07-21 16:36:39.303464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.411 [2024-07-21 16:36:40.303539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.411 [2024-07-21 16:36:40.303592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x618240 with addr=10.0.0.2, port=4420 00:21:22.411 [2024-07-21 16:36:40.303606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618240 is same with the state(5) to be set 00:21:22.411 [2024-07-21 16:36:40.303625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618240 (9): Bad file descriptor 00:21:22.411 [2024-07-21 16:36:40.303642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.411 [2024-07-21 16:36:40.303652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.411 [2024-07-21 16:36:40.303661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.411 [2024-07-21 16:36:40.303680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.411 [2024-07-21 16:36:40.303692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.344 16:36:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.344 [2024-07-21 16:36:41.306500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.344 [2024-07-21 16:36:41.306544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x618240 with addr=10.0.0.2, port=4420 00:21:23.344 [2024-07-21 16:36:41.306559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618240 is same with the state(5) to be set 00:21:23.344 [2024-07-21 16:36:41.306767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x618240 (9): Bad file descriptor 00:21:23.344 [2024-07-21 16:36:41.306985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.344 [2024-07-21 16:36:41.307007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.345 [2024-07-21 16:36:41.307018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.345 [2024-07-21 16:36:41.310172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.345 [2024-07-21 16:36:41.310213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.345 [2024-07-21 16:36:41.535376] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.602 16:36:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96770 00:21:24.169 [2024-07-21 16:36:42.346208] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
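The reconnect loop above only succeeds once the NVMe/TCP listener is restored on the target. As a minimal sketch of that listener toggle — using only the rpc.py subcommands, NQN, and address that appear in this trace, with the sleep duration chosen arbitrarily for illustration:

#!/usr/bin/env bash
# Sketch: drop and restore the NVMe/TCP listener that the timeout test exercises.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Remove the listener; initiator reconnect attempts start failing (connect() errno 111, as in the log).
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Keep the path down long enough for the reconnect/ctrlr-loss logic to run.
sleep 5

# Re-add the listener; the next reconnect attempt completes and I/O resumes.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420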
00:21:29.443 00:21:29.443 Latency(us) 00:21:29.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.443 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:29.443 Verification LBA range: start 0x0 length 0x4000 00:21:29.443 NVMe0n1 : 10.00 6656.77 26.00 4459.31 0.00 11484.51 525.03 3050402.91 00:21:29.443 =================================================================================================================== 00:21:29.443 Total : 6656.77 26.00 4459.31 0.00 11484.51 0.00 3050402.91 00:21:29.443 0 00:21:29.443 16:36:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96599 00:21:29.443 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96599 ']' 00:21:29.443 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96599 00:21:29.443 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:29.443 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.443 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96599 00:21:29.443 killing process with pid 96599 00:21:29.443 Received shutdown signal, test time was about 10.000000 seconds 00:21:29.443 00:21:29.443 Latency(us) 00:21:29.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.443 =================================================================================================================== 00:21:29.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.443 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:29.443 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:29.443 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96599' 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96599 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96599 00:21:29.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96895 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96895 /var/tmp/bdevperf.sock 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96895 ']' 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.444 16:36:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:29.444 [2024-07-21 16:36:47.517637] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:21:29.444 [2024-07-21 16:36:47.517738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96895 ] 00:21:29.444 [2024-07-21 16:36:47.646209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.701 [2024-07-21 16:36:47.730164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.635 16:36:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.635 16:36:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:30.635 16:36:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96919 00:21:30.635 16:36:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96895 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:21:30.635 16:36:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:21:30.635 16:36:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:30.893 NVMe0n1 00:21:30.893 16:36:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.893 16:36:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96978 00:21:30.893 16:36:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:21:31.151 Running I/O for 10 seconds... 
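Before the 10-second run above starts, the trace shows the bdevperf-side setup: bdev_nvme options, a TCP controller attach with a 5-second ctrlr-loss timeout and 2-second reconnect delay, and the perform_tests trigger. A minimal sketch of that sequence, assuming a bdevperf instance is already listening on /var/tmp/bdevperf.sock; every path and flag value below is copied verbatim from the trace:

#!/usr/bin/env bash
# Sketch of the controller-attach sequence shown in the trace above.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# Apply the nvme bdev options used by this test run (flag values as in the trace).
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9

# Attach the NVMe/TCP controller; this creates bdev NVMe0n1 with a 5 s ctrlr-loss
# timeout and a 2 s delay between reconnect attempts.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Start the queued I/O jobs against the attached bdev.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests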
00:21:32.086 16:36:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.347 [2024-07-21 16:36:50.321074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321213] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321228] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25360 is same with the state(5) to be set 00:21:32.347 [2024-07-21 16:36:50.321706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.321980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.321989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322059] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.347 [2024-07-21 16:36:50.322247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.347 [2024-07-21 16:36:50.322257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:62136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 
[2024-07-21 16:36:50.322710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.322987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.322997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.323006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.323018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.323027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.323037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.323046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.323057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.323066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.323077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.323085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.323096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.323105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.323122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.348 [2024-07-21 16:36:50.323132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.348 [2024-07-21 16:36:50.323142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:32.349 [2024-07-21 16:36:50.323543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 
16:36:50.323740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.323989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.349 [2024-07-21 16:36:50.323999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.349 [2024-07-21 16:36:50.324008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.350 [2024-07-21 16:36:50.324352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9198d0 is same with the 
state(5) to be set 00:21:32.350 [2024-07-21 16:36:50.324375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:32.350 [2024-07-21 16:36:50.324382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:32.350 [2024-07-21 16:36:50.324390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3720 len:8 PRP1 0x0 PRP2 0x0 00:21:32.350 [2024-07-21 16:36:50.324398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.350 [2024-07-21 16:36:50.324459] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9198d0 was disconnected and freed. reset controller. 00:21:32.350 [2024-07-21 16:36:50.324709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:32.350 [2024-07-21 16:36:50.324815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ac240 (9): Bad file descriptor 00:21:32.350 [2024-07-21 16:36:50.324955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:32.350 [2024-07-21 16:36:50.324979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8ac240 with addr=10.0.0.2, port=4420 00:21:32.350 [2024-07-21 16:36:50.324989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac240 is same with the state(5) to be set 00:21:32.350 [2024-07-21 16:36:50.325008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ac240 (9): Bad file descriptor 00:21:32.350 [2024-07-21 16:36:50.325025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:32.350 [2024-07-21 16:36:50.325034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:32.350 [2024-07-21 16:36:50.325045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:32.350 [2024-07-21 16:36:50.325065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:32.350 [2024-07-21 16:36:50.325077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:32.350 16:36:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96978 00:21:34.252 [2024-07-21 16:36:52.325170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:34.252 [2024-07-21 16:36:52.325220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8ac240 with addr=10.0.0.2, port=4420 00:21:34.252 [2024-07-21 16:36:52.325233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac240 is same with the state(5) to be set 00:21:34.252 [2024-07-21 16:36:52.325253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ac240 (9): Bad file descriptor 00:21:34.252 [2024-07-21 16:36:52.325286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.252 [2024-07-21 16:36:52.325298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:34.252 [2024-07-21 16:36:52.325309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
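The loop above is the nvmf_timeout host test riding out repeated controller resets: on each pass, connect() to 10.0.0.2 port 4420 fails with errno 111 (ECONNREFUSED), controller re-initialization fails, and bdev_nvme schedules the next reset roughly two seconds later (16:36:50, :52, :54, :56) until the test's wait returns. A throwaway check like the one below (illustrative only, not part of the test scripts; excerpt.log stands for a saved copy of output like the above) tallies those cycles:

  # Hypothetical helper: count reset attempts and list the qpairs they touched.
  grep -c 'resetting controller' excerpt.log
  grep -o 'tqpair=0x[0-9a-f]*' excerpt.log | sort -u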
00:21:34.252 [2024-07-21 16:36:52.325328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:34.252 [2024-07-21 16:36:52.325339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:36.155 [2024-07-21 16:36:54.325427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:36.155 [2024-07-21 16:36:54.325464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8ac240 with addr=10.0.0.2, port=4420 00:21:36.155 [2024-07-21 16:36:54.325477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ac240 is same with the state(5) to be set 00:21:36.155 [2024-07-21 16:36:54.325496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ac240 (9): Bad file descriptor 00:21:36.155 [2024-07-21 16:36:54.325514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:36.155 [2024-07-21 16:36:54.325524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:36.155 [2024-07-21 16:36:54.325534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:36.155 [2024-07-21 16:36:54.325554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:36.155 [2024-07-21 16:36:54.325565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:38.684 [2024-07-21 16:36:56.325651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:38.684 [2024-07-21 16:36:56.325682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:38.684 [2024-07-21 16:36:56.325693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:38.684 [2024-07-21 16:36:56.325702] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:38.684 [2024-07-21 16:36:56.325721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:39.282 00:21:39.282 Latency(us) 00:21:39.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.282 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:39.282 NVMe0n1 : 8.18 3202.99 12.51 15.65 0.00 39738.49 3381.06 7015926.69 00:21:39.282 =================================================================================================================== 00:21:39.282 Total : 3202.99 12.51 15.65 0.00 39738.49 3381.06 7015926.69 00:21:39.282 0 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:39.282 Attaching 5 probes... 
00:21:39.282 1312.861435: reset bdev controller NVMe0 00:21:39.282 1313.025552: reconnect bdev controller NVMe0 00:21:39.282 3313.267656: reconnect delay bdev controller NVMe0 00:21:39.282 3313.283459: reconnect bdev controller NVMe0 00:21:39.282 5313.529038: reconnect delay bdev controller NVMe0 00:21:39.282 5313.542916: reconnect bdev controller NVMe0 00:21:39.282 7313.800665: reconnect delay bdev controller NVMe0 00:21:39.282 7313.815285: reconnect bdev controller NVMe0 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96919 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96895 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96895 ']' 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96895 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96895 00:21:39.282 killing process with pid 96895 00:21:39.282 Received shutdown signal, test time was about 8.236819 seconds 00:21:39.282 00:21:39.282 Latency(us) 00:21:39.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.282 =================================================================================================================== 00:21:39.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96895' 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96895 00:21:39.282 16:36:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96895 00:21:39.540 16:36:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.799 16:36:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:39.799 16:36:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:39.799 16:36:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:39.799 16:36:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:21:39.799 16:36:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.799 16:36:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:21:39.799 16:36:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.799 16:36:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.799 rmmod nvme_tcp 00:21:39.799 rmmod nvme_fabrics 00:21:40.056 rmmod nvme_keyring 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 96303 ']' 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 96303 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96303 ']' 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96303 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96303 00:21:40.056 killing process with pid 96303 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:40.056 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96303' 00:21:40.057 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96303 00:21:40.057 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96303 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:40.314 00:21:40.314 real 0m47.125s 00:21:40.314 user 2m18.347s 00:21:40.314 sys 0m5.109s 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.314 ************************************ 00:21:40.314 END TEST nvmf_timeout 00:21:40.314 ************************************ 00:21:40.314 16:36:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:40.314 16:36:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:40.314 16:36:58 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:21:40.314 16:36:58 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:21:40.314 16:36:58 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:40.314 16:36:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:40.314 16:36:58 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:21:40.314 00:21:40.314 real 15m25.980s 00:21:40.314 user 40m57.902s 00:21:40.314 sys 3m25.267s 00:21:40.314 16:36:58 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.314 ************************************ 00:21:40.314 END TEST nvmf_tcp 00:21:40.314 ************************************ 00:21:40.314 16:36:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:40.573 16:36:58 -- common/autotest_common.sh@1142 -- 
# return 0 00:21:40.573 16:36:58 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:21:40.573 16:36:58 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:40.573 16:36:58 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:40.573 16:36:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.573 16:36:58 -- common/autotest_common.sh@10 -- # set +x 00:21:40.573 ************************************ 00:21:40.573 START TEST spdkcli_nvmf_tcp 00:21:40.573 ************************************ 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:40.573 * Looking for test storage... 00:21:40.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=97194 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 97194 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:40.573 
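Before any spdkcli commands run, the test starts its own nvmf_tgt on two cores and blocks until the RPC server answers. Reduced to its essentials, this is the pattern traced above (a sketch of the run_nvmf_tgt helper; waitforlisten comes from test/common/autotest_common.sh and polls the RPC socket, /var/tmp/spdk.sock by default):

  # Minimal sketch of the launch-and-wait pattern used above.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 &
  nvmf_tgt_pid=$!
  waitforlisten "$nvmf_tgt_pid"   # returns once the target accepts JSON-RPC connections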
16:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 97194 ']' 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.573 16:36:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:40.573 [2024-07-21 16:36:58.714901] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:21:40.573 [2024-07-21 16:36:58.714973] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97194 ] 00:21:40.832 [2024-07-21 16:36:58.847257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:40.832 [2024-07-21 16:36:58.934326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.832 [2024-07-21 16:36:58.934331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.769 16:36:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:41.769 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:41.769 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:41.769 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:41.769 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:41.769 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:41.769 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:41.769 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 
127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:41.769 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:41.769 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:41.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:41.769 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:41.769 ' 00:21:44.301 [2024-07-21 16:37:02.392466] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.673 [2024-07-21 16:37:03.661339] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:21:48.198 [2024-07-21 16:37:06.014423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:21:50.096 [2024-07-21 16:37:08.039603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:51.471 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:51.471 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:51.471 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:51.471 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:51.471 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:51.471 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:51.471 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:51.471 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:51.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:51.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:51.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:51.471 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:51.471 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:51.472 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:51.472 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:51.472 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:51.730 16:37:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:51.730 16:37:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:51.730 16:37:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.730 16:37:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:51.730 16:37:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:51.730 16:37:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.730 16:37:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:21:51.730 16:37:09 
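Each spdkcli path exercised above is a thin wrapper over a JSON-RPC call to the target. As a rough equivalence (a sketch against the default RPC socket using scripts/rpc.py, not what spdkcli_job.py literally invokes, and with some of the transport options shown above trimmed), the first subsystem could be built like this:

  # Sketch: rpc.py equivalents of the spdkcli create commands shown above.
  scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3
  scripts/rpc.py nvmf_create_transport -t tcp -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -a -s N37SXV509SRW -m 4
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260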
spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:21:51.988 16:37:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:52.256 16:37:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:52.256 16:37:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:52.256 16:37:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.256 16:37:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:52.256 16:37:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:52.256 16:37:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.256 16:37:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:52.256 16:37:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:52.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:52.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:52.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:52.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:52.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:52.256 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:52.256 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:52.256 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:52.256 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:52.256 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:52.256 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:52.256 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:52.256 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:52.256 ' 00:21:57.536 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:57.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:57.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:57.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:57.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:21:57.537 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:21:57.537 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:57.537 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:57.537 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 
00:21:57.537 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:57.537 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:57.537 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:57.537 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:57.537 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:57.537 16:37:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:57.537 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.537 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.537 16:37:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 97194 00:21:57.537 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97194 ']' 00:21:57.537 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97194 00:21:57.537 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:21:57.537 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.537 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97194 00:21:57.794 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:57.794 killing process with pid 97194 00:21:57.794 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:57.794 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97194' 00:21:57.794 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 97194 00:21:57.794 16:37:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 97194 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 97194 ']' 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 97194 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 97194 ']' 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 97194 00:21:58.052 Process with pid 97194 is not found 00:21:58.052 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (97194) - No such process 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 97194 is not found' 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:58.052 ************************************ 00:21:58.052 END TEST spdkcli_nvmf_tcp 00:21:58.052 ************************************ 00:21:58.052 00:21:58.052 real 0m17.481s 00:21:58.052 user 0m37.771s 00:21:58.052 sys 0m0.942s 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.052 16:37:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.052 16:37:16 -- common/autotest_common.sh@1142 -- # return 0 00:21:58.052 16:37:16 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:58.052 16:37:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:58.052 16:37:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.052 16:37:16 -- common/autotest_common.sh@10 -- # set +x 00:21:58.052 ************************************ 00:21:58.052 START TEST nvmf_identify_passthru 00:21:58.052 ************************************ 00:21:58.052 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:58.052 * Looking for test storage... 00:21:58.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:58.052 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:58.052 16:37:16 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.052 16:37:16 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.052 16:37:16 
nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.052 16:37:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.052 16:37:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:21:58.052 16:37:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:58.052 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:58.052 16:37:16 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.052 16:37:16 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.052 16:37:16 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.052 16:37:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.052 16:37:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:21:58.052 16:37:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.052 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.052 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:58.052 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:58.052 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:58.053 Cannot find device "nvmf_tgt_br" 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:58.053 Cannot find device "nvmf_tgt_br2" 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:58.053 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:58.310 Cannot find device "nvmf_tgt_br" 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:58.310 Cannot find device "nvmf_tgt_br2" 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:58.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:58.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:58.310 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:58.311 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:58.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:21:58.568 00:21:58.568 --- 10.0.0.2 ping statistics --- 00:21:58.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.568 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:58.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:58.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:58.568 00:21:58.568 --- 10.0.0.3 ping statistics --- 00:21:58.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.568 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:58.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:58.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:58.568 00:21:58.568 --- 10.0.0.1 ping statistics --- 00:21:58.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.568 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:58.568 16:37:16 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:58.568 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:58.568 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:58.568 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:21:58.568 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:21:58.568 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:21:58.568 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:58.568 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:21:58.568 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:21:58.827 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
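The run of ip and iptables calls above is nvmf_veth_init building the virtual test network: veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, 10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the namespace, an iptables rule accepting TCP 4420 on the initiator interface, and ping checks in both directions. Condensed to one veth pair (the log also creates nvmf_tgt_if2/nvmf_tgt_br2 for 10.0.0.3), the topology is:

  # Condensed sketch of the topology nvmf_veth_init sets up above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target-namespace reachability check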
00:21:58.827 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:21:58.827 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:21:58.827 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:21:58.827 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:21:58.827 16:37:16 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:21:58.827 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.827 16:37:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:58.827 16:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:21:58.827 16:37:17 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.827 16:37:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:58.827 16:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97690 00:21:58.827 16:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.827 16:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:58.827 16:37:17 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97690 00:21:58.827 16:37:17 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97690 ']' 00:21:58.827 16:37:17 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.827 16:37:17 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.827 16:37:17 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.827 16:37:17 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.827 16:37:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:59.085 [2024-07-21 16:37:17.087840] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:21:59.085 [2024-07-21 16:37:17.087953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.085 [2024-07-21 16:37:17.227258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.343 [2024-07-21 16:37:17.341655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.343 [2024-07-21 16:37:17.341737] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.343 [2024-07-21 16:37:17.341764] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.343 [2024-07-21 16:37:17.341771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
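With nvmf_tgt started inside the target namespace under --wait-for-rpc, the entries that follow configure it entirely over JSON-RPC: enable the passthru identify handler, finish framework init, create the TCP transport, attach the PCIe controller, and publish it as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. The same sequence, condensed into direct scripts/rpc.py calls against the default /var/tmp/spdk.sock socket (the wait loop is only a simplified stand-in for the harness's waitforlisten helper):

rootdir=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude wait for the RPC socket
rpc() { "$rootdir/scripts/rpc.py" "$@"; }
rpc nvmf_set_config --passthru-identify-ctrlr           # must be set before framework init
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420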
00:21:59.343 [2024-07-21 16:37:17.341778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.343 [2024-07-21 16:37:17.341936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.343 [2024-07-21 16:37:17.342050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.343 [2024-07-21 16:37:17.342762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.343 [2024-07-21 16:37:17.342824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.908 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.908 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:21:59.908 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:21:59.908 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.908 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:21:59.908 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.908 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:21:59.908 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.908 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:00.166 [2024-07-21 16:37:18.180419] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.166 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:00.166 [2024-07-21 16:37:18.194429] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.166 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:00.166 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:00.166 Nvme0n1 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.166 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.166 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.166 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:00.166 [2024-07-21 16:37:18.359598] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.166 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.166 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:00.167 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.167 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:00.424 [ 00:22:00.424 { 00:22:00.424 "allow_any_host": true, 00:22:00.424 "hosts": [], 00:22:00.424 "listen_addresses": [], 00:22:00.424 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:00.424 "subtype": "Discovery" 00:22:00.424 }, 00:22:00.424 { 00:22:00.424 "allow_any_host": true, 00:22:00.424 "hosts": [], 00:22:00.424 "listen_addresses": [ 00:22:00.424 { 00:22:00.424 "adrfam": "IPv4", 00:22:00.424 "traddr": "10.0.0.2", 00:22:00.424 "trsvcid": "4420", 00:22:00.424 "trtype": "TCP" 00:22:00.424 } 00:22:00.424 ], 00:22:00.424 "max_cntlid": 65519, 00:22:00.424 "max_namespaces": 1, 00:22:00.424 "min_cntlid": 1, 00:22:00.424 "model_number": "SPDK bdev Controller", 00:22:00.424 "namespaces": [ 00:22:00.424 { 00:22:00.424 "bdev_name": "Nvme0n1", 00:22:00.424 "name": "Nvme0n1", 00:22:00.424 "nguid": "B5E928390C3A4DEB98DEF22B2023BD63", 00:22:00.424 "nsid": 1, 00:22:00.424 "uuid": "b5e92839-0c3a-4deb-98de-f22b2023bd63" 00:22:00.424 } 00:22:00.424 ], 00:22:00.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.424 "serial_number": "SPDK00000000000001", 00:22:00.424 "subtype": "NVMe" 00:22:00.424 } 00:22:00.424 ] 00:22:00.424 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.424 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:00.424 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:00.424 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:00.424 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:22:00.424 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:00.424 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:00.424 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:00.681 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:22:00.682 16:37:18 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:22:00.682 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:22:00.682 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:00.682 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.682 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:00.682 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.682 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:00.682 16:37:18 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:00.682 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:00.682 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:22:00.682 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.682 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:22:00.682 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.682 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:00.682 rmmod nvme_tcp 00:22:00.682 rmmod nvme_fabrics 00:22:00.940 rmmod nvme_keyring 00:22:00.940 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.940 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:22:00.940 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:22:00.940 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97690 ']' 00:22:00.940 16:37:18 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97690 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97690 ']' 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97690 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97690 00:22:00.940 killing process with pid 97690 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97690' 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97690 00:22:00.940 16:37:18 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97690 00:22:01.198 16:37:19 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:01.199 16:37:19 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:01.199 16:37:19 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:01.199 16:37:19 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:01.199 16:37:19 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:01.199 16:37:19 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.199 16:37:19 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:01.199 16:37:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.199 16:37:19 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:01.199 ************************************ 00:22:01.199 END TEST nvmf_identify_passthru 00:22:01.199 ************************************ 00:22:01.199 00:22:01.199 real 0m3.150s 00:22:01.199 user 0m7.817s 00:22:01.199 sys 0m0.799s 00:22:01.199 16:37:19 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:01.199 16:37:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:01.199 16:37:19 -- common/autotest_common.sh@1142 -- # return 0 00:22:01.199 16:37:19 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:01.199 16:37:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:01.199 16:37:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.199 16:37:19 -- common/autotest_common.sh@10 -- # set +x 00:22:01.199 ************************************ 00:22:01.199 START TEST nvmf_dif 00:22:01.199 ************************************ 00:22:01.199 16:37:19 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:01.199 * Looking for test storage... 00:22:01.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:01.199 16:37:19 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:01.199 16:37:19 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.199 16:37:19 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.199 16:37:19 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.199 16:37:19 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.199 16:37:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.199 16:37:19 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.199 16:37:19 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:01.199 16:37:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.199 16:37:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:01.199 16:37:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:01.199 16:37:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:01.199 16:37:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:01.199 16:37:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.199 16:37:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:01.199 16:37:19 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:01.199 16:37:19 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:01.458 Cannot find device "nvmf_tgt_br" 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@155 -- # true 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:01.458 Cannot find device "nvmf_tgt_br2" 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@156 -- # true 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:01.458 Cannot find device "nvmf_tgt_br" 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:01.458 Cannot find device "nvmf_tgt_br2" 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:01.458 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:01.458 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:01.458 16:37:19 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:01.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:22:01.717 00:22:01.717 --- 10.0.0.2 ping statistics --- 00:22:01.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.717 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:01.717 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:01.717 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:22:01.717 00:22:01.717 --- 10.0.0.3 ping statistics --- 00:22:01.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.717 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:01.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:01.717 00:22:01.717 --- 10.0.0.1 ping statistics --- 00:22:01.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.717 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:01.717 16:37:19 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:01.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:01.976 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:01.976 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:01.976 16:37:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:01.976 16:37:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.976 16:37:20 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.976 16:37:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=98038 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:01.976 16:37:20 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 98038 00:22:01.976 16:37:20 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 98038 ']' 00:22:01.976 16:37:20 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.976 16:37:20 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.976 16:37:20 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.976 16:37:20 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.976 16:37:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:02.234 [2024-07-21 16:37:20.210622] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:22:02.234 [2024-07-21 16:37:20.210721] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.234 [2024-07-21 16:37:20.352449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.492 [2024-07-21 16:37:20.445946] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
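For reference, the ip(8) calls issued by nvmf_veth_init in the preceding entries boil down to the topology below: the initiator stays in the root namespace on 10.0.0.1, the target runs inside nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3, and the three veth peers are bridged together. A condensed recap, with interface names, addresses and firewall rules as in the trace (ordering slightly compacted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target listener
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
  ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # root ns -> target ns must work before any NVMe/TCP traffic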
00:22:02.492 [2024-07-21 16:37:20.446008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.492 [2024-07-21 16:37:20.446023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.492 [2024-07-21 16:37:20.446033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.492 [2024-07-21 16:37:20.446042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.492 [2024-07-21 16:37:20.446071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.058 16:37:21 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.058 16:37:21 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:22:03.058 16:37:21 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.058 16:37:21 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.058 16:37:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:03.058 16:37:21 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.058 16:37:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:03.058 16:37:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:03.058 16:37:21 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.058 16:37:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:03.317 [2024-07-21 16:37:21.273342] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.317 16:37:21 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.317 16:37:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:03.317 16:37:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:03.317 16:37:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:03.317 16:37:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:03.317 ************************************ 00:22:03.317 START TEST fio_dif_1_default 00:22:03.317 ************************************ 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:03.317 bdev_null0 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.317 16:37:21 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:03.317 [2024-07-21 16:37:21.317444] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:03.317 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.318 { 00:22:03.318 "params": { 00:22:03.318 "name": "Nvme$subsystem", 00:22:03.318 "trtype": "$TEST_TRANSPORT", 00:22:03.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.318 "adrfam": "ipv4", 00:22:03.318 "trsvcid": "$NVMF_PORT", 00:22:03.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.318 "hdgst": ${hdgst:-false}, 00:22:03.318 "ddgst": ${ddgst:-false} 00:22:03.318 }, 00:22:03.318 "method": "bdev_nvme_attach_controller" 00:22:03.318 } 00:22:03.318 EOF 00:22:03.318 )") 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 
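The fio_dif_1_default job being assembled here runs fio through SPDK's bdev fio plugin: gen_nvmf_target_json emits a bdev_nvme_attach_controller block (printed a little further down) that tells the plugin to connect to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420, and fio_bdev LD_PRELOADs build/fio/spdk_bdev before invoking /usr/src/fio/fio with --ioengine=spdk_bdev. A standalone sketch of an equivalent invocation; the outer "subsystems"/"bdev"/"config" wrapper is assumed from SPDK's usual JSON config layout, and the job file name is a placeholder (the harness feeds both the config and the job file through /dev/fd descriptors):

rootdir=/home/vagrant/spdk_repo/spdk
cat > /tmp/nvme0_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
LD_PRELOAD="$rootdir/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0_bdev.json your_job.fio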
00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:03.318 "params": { 00:22:03.318 "name": "Nvme0", 00:22:03.318 "trtype": "tcp", 00:22:03.318 "traddr": "10.0.0.2", 00:22:03.318 "adrfam": "ipv4", 00:22:03.318 "trsvcid": "4420", 00:22:03.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:03.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:03.318 "hdgst": false, 00:22:03.318 "ddgst": false 00:22:03.318 }, 00:22:03.318 "method": "bdev_nvme_attach_controller" 00:22:03.318 }' 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:03.318 16:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:03.577 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:03.577 fio-3.35 00:22:03.577 Starting 1 thread 00:22:15.767 00:22:15.767 filename0: (groupid=0, jobs=1): err= 0: pid=98123: Sun Jul 21 16:37:32 2024 00:22:15.767 read: IOPS=1387, BW=5549KiB/s (5682kB/s)(54.2MiB/10008msec) 00:22:15.767 slat (nsec): min=5831, max=47464, avg=7641.14, stdev=3010.69 00:22:15.767 clat (usec): min=373, max=42496, avg=2859.72, stdev=9578.25 00:22:15.767 lat (usec): min=379, max=42506, avg=2867.36, stdev=9578.34 00:22:15.767 clat percentiles (usec): 00:22:15.767 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 420], 00:22:15.767 | 30.00th=[ 429], 40.00th=[ 437], 50.00th=[ 445], 
60.00th=[ 457], 00:22:15.767 | 70.00th=[ 474], 80.00th=[ 498], 90.00th=[ 545], 95.00th=[40633], 00:22:15.767 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:22:15.767 | 99.99th=[42730] 00:22:15.767 bw ( KiB/s): min= 576, max=10080, per=100.00%, avg=5658.95, stdev=3161.95, samples=19 00:22:15.767 iops : min= 144, max= 2520, avg=1414.74, stdev=790.49, samples=19 00:22:15.767 lat (usec) : 500=81.11%, 750=12.91%, 1000=0.01% 00:22:15.767 lat (msec) : 4=0.02%, 10=0.01%, 50=5.93% 00:22:15.767 cpu : usr=90.49%, sys=8.38%, ctx=31, majf=0, minf=9 00:22:15.767 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:15.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:15.767 issued rwts: total=13884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:15.767 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:15.767 00:22:15.767 Run status group 0 (all jobs): 00:22:15.767 READ: bw=5549KiB/s (5682kB/s), 5549KiB/s-5549KiB/s (5682kB/s-5682kB/s), io=54.2MiB (56.9MB), run=10008-10008msec 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.767 00:22:15.767 real 0m11.028s 00:22:15.767 user 0m9.708s 00:22:15.767 sys 0m1.121s 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:15.767 ************************************ 00:22:15.767 END TEST fio_dif_1_default 00:22:15.767 ************************************ 00:22:15.767 16:37:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:15.767 16:37:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:15.767 16:37:32 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:15.767 16:37:32 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:15.767 16:37:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:15.767 ************************************ 00:22:15.767 START TEST fio_dif_1_multi_subsystems 00:22:15.767 ************************************ 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:15.767 bdev_null0 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:15.767 [2024-07-21 16:37:32.399978] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:15.767 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:15.768 bdev_null1 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.768 { 00:22:15.768 "params": { 00:22:15.768 "name": "Nvme$subsystem", 00:22:15.768 "trtype": "$TEST_TRANSPORT", 00:22:15.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.768 "adrfam": "ipv4", 00:22:15.768 "trsvcid": "$NVMF_PORT", 00:22:15.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.768 "hdgst": ${hdgst:-false}, 00:22:15.768 "ddgst": ${ddgst:-false} 00:22:15.768 }, 00:22:15.768 "method": "bdev_nvme_attach_controller" 00:22:15.768 } 00:22:15.768 EOF 00:22:15.768 )") 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:15.768 16:37:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:15.768 { 00:22:15.768 "params": { 00:22:15.768 "name": "Nvme$subsystem", 00:22:15.768 "trtype": "$TEST_TRANSPORT", 00:22:15.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.768 "adrfam": "ipv4", 00:22:15.768 "trsvcid": "$NVMF_PORT", 00:22:15.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.768 "hdgst": ${hdgst:-false}, 00:22:15.768 "ddgst": ${ddgst:-false} 00:22:15.768 }, 00:22:15.768 "method": "bdev_nvme_attach_controller" 00:22:15.768 } 00:22:15.768 EOF 00:22:15.768 )") 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
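This second dif test wires two null bdevs behind two subsystems (cnode0 and cnode1) on the same listener and generates one bdev_nvme_attach_controller block per subsystem, as the config printed just below shows. Not part of the harness, but a quick way to eyeball both targets from the initiator side before fio starts is to reuse the fabrics identify invocation seen earlier in this log:

rootdir=/home/vagrant/spdk_repo/spdk
for i in 0 1; do
  "$rootdir/build/bin/spdk_nvme_identify" \
      -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode$i" \
      | grep -E 'Serial Number|Model Number'
done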
00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:15.768 "params": { 00:22:15.768 "name": "Nvme0", 00:22:15.768 "trtype": "tcp", 00:22:15.768 "traddr": "10.0.0.2", 00:22:15.768 "adrfam": "ipv4", 00:22:15.768 "trsvcid": "4420", 00:22:15.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:15.768 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:15.768 "hdgst": false, 00:22:15.768 "ddgst": false 00:22:15.768 }, 00:22:15.768 "method": "bdev_nvme_attach_controller" 00:22:15.768 },{ 00:22:15.768 "params": { 00:22:15.768 "name": "Nvme1", 00:22:15.768 "trtype": "tcp", 00:22:15.768 "traddr": "10.0.0.2", 00:22:15.768 "adrfam": "ipv4", 00:22:15.768 "trsvcid": "4420", 00:22:15.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.768 "hdgst": false, 00:22:15.768 "ddgst": false 00:22:15.768 }, 00:22:15.768 "method": "bdev_nvme_attach_controller" 00:22:15.768 }' 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:15.768 16:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:15.768 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:15.768 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:15.768 fio-3.35 00:22:15.768 Starting 2 threads 00:22:25.724 00:22:25.724 filename0: (groupid=0, jobs=1): err= 0: pid=98282: Sun Jul 21 16:37:43 2024 00:22:25.724 read: IOPS=199, BW=799KiB/s (818kB/s)(8000KiB/10010msec) 00:22:25.724 slat (nsec): min=6001, max=57771, avg=9759.73, stdev=5981.45 00:22:25.724 clat (usec): min=357, max=42520, avg=19988.51, stdev=20246.77 00:22:25.724 lat (usec): min=363, max=42530, avg=19998.27, stdev=20246.78 00:22:25.724 clat percentiles (usec): 00:22:25.724 | 1.00th=[ 383], 5.00th=[ 404], 10.00th=[ 416], 20.00th=[ 437], 00:22:25.724 | 30.00th=[ 457], 40.00th=[ 494], 50.00th=[ 562], 60.00th=[40633], 00:22:25.724 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:22:25.724 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:22:25.724 | 99.99th=[42730] 00:22:25.724 bw ( KiB/s): min= 480, max= 1088, per=53.90%, avg=798.40, stdev=186.73, samples=20 00:22:25.724 iops : 
min= 120, max= 272, avg=199.60, stdev=46.68, samples=20 00:22:25.724 lat (usec) : 500=41.35%, 750=10.05% 00:22:25.724 lat (msec) : 2=0.40%, 50=48.20% 00:22:25.724 cpu : usr=97.40%, sys=2.16%, ctx=167, majf=0, minf=9 00:22:25.724 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:25.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.724 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:25.724 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:25.724 filename1: (groupid=0, jobs=1): err= 0: pid=98283: Sun Jul 21 16:37:43 2024 00:22:25.724 read: IOPS=170, BW=682KiB/s (698kB/s)(6832KiB/10018msec) 00:22:25.724 slat (nsec): min=5992, max=55309, avg=9245.71, stdev=5539.67 00:22:25.724 clat (usec): min=379, max=42469, avg=23431.22, stdev=20085.69 00:22:25.724 lat (usec): min=385, max=42479, avg=23440.47, stdev=20085.86 00:22:25.724 clat percentiles (usec): 00:22:25.724 | 1.00th=[ 400], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 453], 00:22:25.724 | 30.00th=[ 482], 40.00th=[ 545], 50.00th=[40633], 60.00th=[41157], 00:22:25.724 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:22:25.724 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:22:25.724 | 99.99th=[42730] 00:22:25.724 bw ( KiB/s): min= 512, max= 1056, per=46.00%, avg=681.60, stdev=142.77, samples=20 00:22:25.724 iops : min= 128, max= 264, avg=170.40, stdev=35.69, samples=20 00:22:25.724 lat (usec) : 500=35.83%, 750=7.03%, 1000=0.23% 00:22:25.724 lat (msec) : 2=0.23%, 50=56.67% 00:22:25.724 cpu : usr=97.48%, sys=2.06%, ctx=15, majf=0, minf=0 00:22:25.724 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:25.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.724 issued rwts: total=1708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:25.724 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:25.724 00:22:25.724 Run status group 0 (all jobs): 00:22:25.724 READ: bw=1481KiB/s (1516kB/s), 682KiB/s-799KiB/s (698kB/s-818kB/s), io=14.5MiB (15.2MB), run=10010-10018msec 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.724 16:37:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.724 00:22:25.724 real 0m11.289s 00:22:25.724 user 0m20.398s 00:22:25.724 sys 0m0.731s 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:25.724 ************************************ 00:22:25.724 16:37:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 END TEST fio_dif_1_multi_subsystems 00:22:25.724 ************************************ 00:22:25.724 16:37:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:25.724 16:37:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:25.724 16:37:43 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:25.724 16:37:43 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.724 16:37:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 ************************************ 00:22:25.724 START TEST fio_dif_rand_params 00:22:25.724 ************************************ 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
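fio_dif_rand_params starts by building its target: create_subsystem 0 creates a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, wraps it in subsystem nqn.2016-06.io.spdk:cnode0, and exposes it on a TCP listener at 10.0.0.2:4420, as the rpc_cmd trace on the following entries shows. rpc_cmd in the trace wraps SPDK's scripts/rpc.py, so the same setup can be reproduced by hand roughly as below (repo path and addresses taken from this run, default RPC socket assumed).

# Standalone equivalent of the create_subsystem 0 trace that follows (sketch;
# rpc.py invoked directly instead of the rpc_cmd wrapper).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420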
00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 bdev_null0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:25.724 [2024-07-21 16:37:43.747240] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:25.724 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:25.725 { 00:22:25.725 "params": { 00:22:25.725 "name": "Nvme$subsystem", 00:22:25.725 "trtype": "$TEST_TRANSPORT", 00:22:25.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:25.725 "adrfam": "ipv4", 00:22:25.725 "trsvcid": "$NVMF_PORT", 00:22:25.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:25.725 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:25.725 "hdgst": ${hdgst:-false}, 00:22:25.725 "ddgst": ${ddgst:-false} 00:22:25.725 }, 00:22:25.725 "method": "bdev_nvme_attach_controller" 00:22:25.725 } 00:22:25.725 EOF 00:22:25.725 )") 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:25.725 "params": { 00:22:25.725 "name": "Nvme0", 00:22:25.725 "trtype": "tcp", 00:22:25.725 "traddr": "10.0.0.2", 00:22:25.725 "adrfam": "ipv4", 00:22:25.725 "trsvcid": "4420", 00:22:25.725 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:25.725 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:25.725 "hdgst": false, 00:22:25.725 "ddgst": false 00:22:25.725 }, 00:22:25.725 "method": "bdev_nvme_attach_controller" 00:22:25.725 }' 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:25.725 16:37:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:25.982 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:25.982 ... 
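At this point the harness preloads the SPDK fio plugin and launches fio with two pipes: the bdev JSON printed just above on /dev/fd/62 and the generated job file on /dev/fd/61. Only the job banner is echoed, so in the sketch below the job file is inferred from that banner and from the bs=128k/numjobs=3/iodepth=3/runtime=5 parameters set at the start of this test; the bdev name Nvme0n1 assumes the default naming produced by bdev_nvme_attach_controller with name Nvme0.

# Hand-run equivalent of the traced fio invocation (sketch). bdev.json holds
# the JSON printed by gen_nvmf_target_json above; dif.job is inferred, not
# copied from the (unechoed) file passed on /dev/fd/61.
cat > dif.job <<'JOB'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
JOB

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.job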
00:22:25.982 fio-3.35 00:22:25.982 Starting 3 threads 00:22:32.550 00:22:32.550 filename0: (groupid=0, jobs=1): err= 0: pid=98439: Sun Jul 21 16:37:49 2024 00:22:32.550 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(154MiB/5004msec) 00:22:32.550 slat (nsec): min=6277, max=55430, avg=9848.79, stdev=5422.19 00:22:32.550 clat (usec): min=3724, max=16900, avg=12149.89, stdev=3337.15 00:22:32.550 lat (usec): min=3731, max=16911, avg=12159.74, stdev=3337.27 00:22:32.550 clat percentiles (usec): 00:22:32.550 | 1.00th=[ 3785], 5.00th=[ 3884], 10.00th=[ 7701], 20.00th=[ 8717], 00:22:32.550 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:22:32.550 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15008], 95.00th=[15664], 00:22:32.550 | 99.00th=[16450], 99.50th=[16450], 99.90th=[16909], 99.95th=[16909], 00:22:32.550 | 99.99th=[16909] 00:22:32.550 bw ( KiB/s): min=26112, max=45312, per=32.41%, avg=31493.50, stdev=6583.18, samples=10 00:22:32.550 iops : min= 204, max= 354, avg=246.00, stdev=51.46, samples=10 00:22:32.550 lat (msec) : 4=6.08%, 10=19.71%, 20=74.21% 00:22:32.550 cpu : usr=93.28%, sys=5.40%, ctx=6, majf=0, minf=0 00:22:32.550 IO depths : 1=32.9%, 2=67.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:32.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.550 issued rwts: total=1233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:32.550 filename0: (groupid=0, jobs=1): err= 0: pid=98440: Sun Jul 21 16:37:49 2024 00:22:32.550 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(158MiB/5005msec) 00:22:32.550 slat (nsec): min=6413, max=58576, avg=11761.34, stdev=5544.11 00:22:32.550 clat (usec): min=4850, max=54822, avg=11848.14, stdev=6082.75 00:22:32.550 lat (usec): min=4876, max=54832, avg=11859.90, stdev=6083.26 00:22:32.550 clat percentiles (usec): 00:22:32.550 | 1.00th=[ 5997], 5.00th=[ 7046], 10.00th=[ 7701], 20.00th=[10159], 00:22:32.550 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:22:32.550 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12911], 95.00th=[13566], 00:22:32.550 | 99.00th=[51643], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:22:32.550 | 99.99th=[54789] 00:22:32.550 bw ( KiB/s): min=25344, max=35328, per=33.25%, avg=32313.80, stdev=2771.83, samples=10 00:22:32.550 iops : min= 198, max= 276, avg=252.40, stdev=21.64, samples=10 00:22:32.550 lat (msec) : 10=18.42%, 20=79.45%, 50=0.71%, 100=1.42% 00:22:32.550 cpu : usr=92.99%, sys=5.56%, ctx=64, majf=0, minf=0 00:22:32.550 IO depths : 1=7.7%, 2=92.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:32.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.550 issued rwts: total=1265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:32.550 filename0: (groupid=0, jobs=1): err= 0: pid=98441: Sun Jul 21 16:37:49 2024 00:22:32.550 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(166MiB/5036msec) 00:22:32.550 slat (nsec): min=6014, max=73208, avg=12133.18, stdev=5554.75 00:22:32.550 clat (usec): min=5374, max=53035, avg=11383.36, stdev=7440.25 00:22:32.550 lat (usec): min=5384, max=53045, avg=11395.49, stdev=7440.50 00:22:32.550 clat percentiles (usec): 00:22:32.550 | 1.00th=[ 6915], 5.00th=[ 7767], 10.00th=[ 8717], 20.00th=[ 9372], 00:22:32.550 
| 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:22:32.550 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11338], 95.00th=[11994], 00:22:32.550 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:22:32.550 | 99.99th=[53216] 00:22:32.550 bw ( KiB/s): min=23808, max=38400, per=34.83%, avg=33843.20, stdev=5290.39, samples=10 00:22:32.550 iops : min= 186, max= 300, avg=264.40, stdev=41.33, samples=10 00:22:32.550 lat (msec) : 10=42.19%, 20=54.42%, 50=1.13%, 100=2.26% 00:22:32.550 cpu : usr=93.29%, sys=5.22%, ctx=58, majf=0, minf=0 00:22:32.550 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:32.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.550 issued rwts: total=1325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:32.550 00:22:32.550 Run status group 0 (all jobs): 00:22:32.550 READ: bw=94.9MiB/s (99.5MB/s), 30.8MiB/s-32.9MiB/s (32.3MB/s-34.5MB/s), io=478MiB (501MB), run=5004-5036msec 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.550 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 bdev_null0 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 [2024-07-21 16:37:49.805107] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 bdev_null1 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 bdev_null2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.551 { 00:22:32.551 "params": { 00:22:32.551 "name": "Nvme$subsystem", 00:22:32.551 "trtype": "$TEST_TRANSPORT", 00:22:32.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.551 "adrfam": "ipv4", 00:22:32.551 "trsvcid": "$NVMF_PORT", 00:22:32.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.551 "hdgst": ${hdgst:-false}, 00:22:32.551 "ddgst": ${ddgst:-false} 00:22:32.551 }, 00:22:32.551 "method": "bdev_nvme_attach_controller" 00:22:32.551 } 00:22:32.551 EOF 00:22:32.551 )") 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.551 { 00:22:32.551 "params": { 00:22:32.551 "name": "Nvme$subsystem", 00:22:32.551 "trtype": "$TEST_TRANSPORT", 00:22:32.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.551 "adrfam": "ipv4", 00:22:32.551 "trsvcid": "$NVMF_PORT", 00:22:32.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.551 "hdgst": ${hdgst:-false}, 00:22:32.551 "ddgst": ${ddgst:-false} 00:22:32.551 }, 00:22:32.551 "method": "bdev_nvme_attach_controller" 00:22:32.551 } 00:22:32.551 EOF 00:22:32.551 )") 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:32.551 { 00:22:32.551 "params": { 00:22:32.551 "name": "Nvme$subsystem", 00:22:32.551 "trtype": "$TEST_TRANSPORT", 00:22:32.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.551 "adrfam": "ipv4", 00:22:32.551 "trsvcid": "$NVMF_PORT", 00:22:32.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.551 "hdgst": ${hdgst:-false}, 00:22:32.551 "ddgst": ${ddgst:-false} 00:22:32.551 }, 00:22:32.551 "method": "bdev_nvme_attach_controller" 00:22:32.551 } 00:22:32.551 EOF 00:22:32.551 )") 00:22:32.551 16:37:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:32.552 "params": { 00:22:32.552 "name": "Nvme0", 00:22:32.552 "trtype": "tcp", 00:22:32.552 "traddr": "10.0.0.2", 00:22:32.552 "adrfam": "ipv4", 00:22:32.552 "trsvcid": "4420", 00:22:32.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:32.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:32.552 "hdgst": false, 00:22:32.552 "ddgst": false 00:22:32.552 }, 00:22:32.552 "method": "bdev_nvme_attach_controller" 00:22:32.552 },{ 00:22:32.552 "params": { 00:22:32.552 "name": "Nvme1", 00:22:32.552 "trtype": "tcp", 00:22:32.552 "traddr": "10.0.0.2", 00:22:32.552 "adrfam": "ipv4", 00:22:32.552 "trsvcid": "4420", 00:22:32.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.552 "hdgst": false, 00:22:32.552 "ddgst": false 00:22:32.552 }, 00:22:32.552 "method": "bdev_nvme_attach_controller" 00:22:32.552 },{ 00:22:32.552 "params": { 00:22:32.552 "name": "Nvme2", 00:22:32.552 "trtype": "tcp", 00:22:32.552 "traddr": "10.0.0.2", 00:22:32.552 "adrfam": "ipv4", 00:22:32.552 "trsvcid": "4420", 00:22:32.552 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:32.552 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:32.552 "hdgst": false, 00:22:32.552 "ddgst": false 00:22:32.552 }, 00:22:32.552 "method": "bdev_nvme_attach_controller" 00:22:32.552 }' 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.552 16:37:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:32.552 16:37:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:32.552 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:32.552 ... 00:22:32.552 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:32.552 ... 00:22:32.552 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:32.552 ... 00:22:32.552 fio-3.35 00:22:32.552 Starting 24 threads 00:22:44.749 00:22:44.749 filename0: (groupid=0, jobs=1): err= 0: pid=98540: Sun Jul 21 16:38:00 2024 00:22:44.749 read: IOPS=203, BW=814KiB/s (833kB/s)(8136KiB/10001msec) 00:22:44.749 slat (usec): min=6, max=8051, avg=33.14, stdev=397.37 00:22:44.749 clat (msec): min=34, max=155, avg=78.40, stdev=20.54 00:22:44.749 lat (msec): min=34, max=155, avg=78.44, stdev=20.55 00:22:44.749 clat percentiles (msec): 00:22:44.749 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:22:44.749 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:22:44.749 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 117], 00:22:44.749 | 99.00th=[ 140], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:22:44.749 | 99.99th=[ 157] 00:22:44.749 bw ( KiB/s): min= 640, max= 944, per=3.72%, avg=815.79, stdev=86.66, samples=19 00:22:44.749 iops : min= 160, max= 236, avg=203.89, stdev=21.70, samples=19 00:22:44.749 lat (msec) : 50=7.03%, 100=75.96%, 250=17.01% 00:22:44.749 cpu : usr=32.61%, sys=0.55%, ctx=908, majf=0, minf=9 00:22:44.749 IO depths : 1=2.3%, 2=5.4%, 4=15.5%, 8=66.1%, 16=10.7%, 32=0.0%, >=64=0.0% 00:22:44.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 complete : 0=0.0%, 4=91.5%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.749 filename0: (groupid=0, jobs=1): err= 0: pid=98541: Sun Jul 21 16:38:00 2024 00:22:44.749 read: IOPS=216, BW=864KiB/s (885kB/s)(8660KiB/10020msec) 00:22:44.749 slat (usec): min=4, max=8045, avg=30.37, stdev=354.94 00:22:44.749 clat (msec): min=32, max=168, avg=73.86, stdev=22.46 00:22:44.749 lat (msec): min=32, max=168, avg=73.89, stdev=22.46 00:22:44.749 clat percentiles (msec): 00:22:44.749 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 58], 00:22:44.749 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:22:44.749 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 101], 95.00th=[ 115], 00:22:44.749 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:22:44.749 | 99.99th=[ 169] 00:22:44.749 bw ( KiB/s): min= 512, max= 1200, per=3.92%, avg=859.45, stdev=138.17, samples=20 00:22:44.749 iops : min= 128, max= 300, avg=214.85, stdev=34.53, samples=20 00:22:44.749 lat (msec) : 50=14.46%, 100=75.66%, 250=9.88% 00:22:44.749 cpu : usr=32.58%, sys=0.61%, ctx=868, majf=0, minf=9 00:22:44.749 IO depths : 1=1.1%, 2=2.7%, 4=10.3%, 8=73.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:22:44.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 complete : 0=0.0%, 4=90.3%, 8=5.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 issued rwts: 
total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.749 filename0: (groupid=0, jobs=1): err= 0: pid=98542: Sun Jul 21 16:38:00 2024 00:22:44.749 read: IOPS=203, BW=815KiB/s (835kB/s)(8176KiB/10030msec) 00:22:44.749 slat (usec): min=4, max=8021, avg=21.57, stdev=250.52 00:22:44.749 clat (msec): min=34, max=178, avg=78.39, stdev=22.78 00:22:44.749 lat (msec): min=34, max=178, avg=78.41, stdev=22.79 00:22:44.749 clat percentiles (msec): 00:22:44.749 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 61], 00:22:44.749 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:22:44.749 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:22:44.749 | 99.00th=[ 155], 99.50th=[ 178], 99.90th=[ 180], 99.95th=[ 180], 00:22:44.749 | 99.99th=[ 180] 00:22:44.749 bw ( KiB/s): min= 560, max= 1024, per=3.70%, avg=811.10, stdev=113.06, samples=20 00:22:44.749 iops : min= 140, max= 256, avg=202.75, stdev=28.28, samples=20 00:22:44.749 lat (msec) : 50=9.00%, 100=74.85%, 250=16.14% 00:22:44.749 cpu : usr=33.72%, sys=0.68%, ctx=918, majf=0, minf=9 00:22:44.749 IO depths : 1=1.8%, 2=4.3%, 4=14.0%, 8=68.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:22:44.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.749 filename0: (groupid=0, jobs=1): err= 0: pid=98543: Sun Jul 21 16:38:00 2024 00:22:44.749 read: IOPS=197, BW=789KiB/s (808kB/s)(7896KiB/10006msec) 00:22:44.749 slat (usec): min=4, max=8020, avg=24.09, stdev=270.27 00:22:44.749 clat (msec): min=35, max=147, avg=80.91, stdev=20.08 00:22:44.749 lat (msec): min=35, max=147, avg=80.94, stdev=20.08 00:22:44.749 clat percentiles (msec): 00:22:44.749 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 65], 00:22:44.749 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 85], 00:22:44.749 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:22:44.749 | 99.00th=[ 132], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:22:44.749 | 99.99th=[ 148] 00:22:44.749 bw ( KiB/s): min= 640, max= 1024, per=3.60%, avg=788.26, stdev=95.58, samples=19 00:22:44.749 iops : min= 160, max= 256, avg=197.00, stdev=23.90, samples=19 00:22:44.749 lat (msec) : 50=7.29%, 100=76.09%, 250=16.62% 00:22:44.749 cpu : usr=33.75%, sys=0.76%, ctx=927, majf=0, minf=9 00:22:44.749 IO depths : 1=2.1%, 2=5.1%, 4=15.5%, 8=66.4%, 16=10.9%, 32=0.0%, >=64=0.0% 00:22:44.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 complete : 0=0.0%, 4=91.5%, 8=3.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.749 filename0: (groupid=0, jobs=1): err= 0: pid=98544: Sun Jul 21 16:38:00 2024 00:22:44.749 read: IOPS=207, BW=830KiB/s (850kB/s)(8304KiB/10004msec) 00:22:44.749 slat (usec): min=6, max=8031, avg=21.33, stdev=248.80 00:22:44.749 clat (msec): min=4, max=174, avg=76.96, stdev=22.07 00:22:44.749 lat (msec): min=4, max=174, avg=76.98, stdev=22.08 00:22:44.749 clat percentiles (msec): 00:22:44.749 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 62], 00:22:44.749 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:22:44.749 | 70.00th=[ 85], 
80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 117], 00:22:44.749 | 99.00th=[ 148], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 176], 00:22:44.749 | 99.99th=[ 176] 00:22:44.749 bw ( KiB/s): min= 560, max= 1024, per=3.77%, avg=826.89, stdev=116.64, samples=19 00:22:44.749 iops : min= 140, max= 256, avg=206.68, stdev=29.15, samples=19 00:22:44.749 lat (msec) : 10=0.77%, 50=7.03%, 100=77.75%, 250=14.45% 00:22:44.749 cpu : usr=41.85%, sys=0.82%, ctx=1173, majf=0, minf=9 00:22:44.749 IO depths : 1=1.9%, 2=4.8%, 4=15.0%, 8=66.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:22:44.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 complete : 0=0.0%, 4=91.4%, 8=3.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.749 filename0: (groupid=0, jobs=1): err= 0: pid=98545: Sun Jul 21 16:38:00 2024 00:22:44.749 read: IOPS=272, BW=1091KiB/s (1118kB/s)(10.7MiB/10053msec) 00:22:44.749 slat (usec): min=4, max=8019, avg=20.41, stdev=212.94 00:22:44.749 clat (msec): min=17, max=127, avg=58.44, stdev=16.52 00:22:44.749 lat (msec): min=17, max=127, avg=58.46, stdev=16.52 00:22:44.749 clat percentiles (msec): 00:22:44.749 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 44], 00:22:44.749 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 57], 60.00th=[ 62], 00:22:44.749 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 82], 95.00th=[ 90], 00:22:44.749 | 99.00th=[ 104], 99.50th=[ 115], 99.90th=[ 128], 99.95th=[ 128], 00:22:44.749 | 99.99th=[ 128] 00:22:44.749 bw ( KiB/s): min= 849, max= 1328, per=4.97%, avg=1090.45, stdev=137.47, samples=20 00:22:44.749 iops : min= 212, max= 332, avg=272.60, stdev=34.39, samples=20 00:22:44.749 lat (msec) : 20=0.58%, 50=39.99%, 100=58.22%, 250=1.20% 00:22:44.749 cpu : usr=42.91%, sys=0.74%, ctx=1508, majf=0, minf=9 00:22:44.749 IO depths : 1=0.8%, 2=1.6%, 4=8.5%, 8=76.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:22:44.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 complete : 0=0.0%, 4=89.3%, 8=5.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.749 issued rwts: total=2743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.749 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.749 filename0: (groupid=0, jobs=1): err= 0: pid=98546: Sun Jul 21 16:38:00 2024 00:22:44.749 read: IOPS=221, BW=885KiB/s (906kB/s)(8888KiB/10043msec) 00:22:44.749 slat (usec): min=6, max=8024, avg=17.37, stdev=170.17 00:22:44.749 clat (msec): min=32, max=151, avg=72.21, stdev=19.83 00:22:44.750 lat (msec): min=32, max=151, avg=72.23, stdev=19.83 00:22:44.750 clat percentiles (msec): 00:22:44.750 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 57], 00:22:44.750 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 70], 60.00th=[ 72], 00:22:44.750 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 102], 95.00th=[ 108], 00:22:44.750 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 153], 99.95th=[ 153], 00:22:44.750 | 99.99th=[ 153] 00:22:44.750 bw ( KiB/s): min= 677, max= 1080, per=4.02%, avg=882.25, stdev=106.97, samples=20 00:22:44.750 iops : min= 169, max= 270, avg=220.55, stdev=26.77, samples=20 00:22:44.750 lat (msec) : 50=14.00%, 100=75.92%, 250=10.08% 00:22:44.750 cpu : usr=32.56%, sys=0.60%, ctx=922, majf=0, minf=9 00:22:44.750 IO depths : 1=1.5%, 2=3.5%, 4=11.8%, 8=71.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:22:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 complete : 0=0.0%, 4=90.3%, 8=4.8%, 
16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 issued rwts: total=2222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.750 filename0: (groupid=0, jobs=1): err= 0: pid=98547: Sun Jul 21 16:38:00 2024 00:22:44.750 read: IOPS=238, BW=954KiB/s (977kB/s)(9592KiB/10053msec) 00:22:44.750 slat (usec): min=4, max=8047, avg=21.94, stdev=283.61 00:22:44.750 clat (msec): min=23, max=142, avg=66.84, stdev=18.80 00:22:44.750 lat (msec): min=23, max=142, avg=66.86, stdev=18.81 00:22:44.750 clat percentiles (msec): 00:22:44.750 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 49], 00:22:44.750 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 70], 00:22:44.750 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 95], 95.00th=[ 105], 00:22:44.750 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 144], 99.95th=[ 144], 00:22:44.750 | 99.99th=[ 144] 00:22:44.750 bw ( KiB/s): min= 768, max= 1328, per=4.34%, avg=952.30, stdev=165.26, samples=20 00:22:44.750 iops : min= 192, max= 332, avg=238.05, stdev=41.28, samples=20 00:22:44.750 lat (msec) : 50=22.56%, 100=70.31%, 250=7.13% 00:22:44.750 cpu : usr=33.57%, sys=0.59%, ctx=1042, majf=0, minf=9 00:22:44.750 IO depths : 1=0.9%, 2=1.9%, 4=9.4%, 8=75.2%, 16=12.6%, 32=0.0%, >=64=0.0% 00:22:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.750 filename1: (groupid=0, jobs=1): err= 0: pid=98548: Sun Jul 21 16:38:00 2024 00:22:44.750 read: IOPS=226, BW=905KiB/s (926kB/s)(9092KiB/10049msec) 00:22:44.750 slat (usec): min=4, max=8057, avg=20.65, stdev=238.19 00:22:44.750 clat (msec): min=21, max=143, avg=70.49, stdev=19.87 00:22:44.750 lat (msec): min=21, max=143, avg=70.51, stdev=19.88 00:22:44.750 clat percentiles (msec): 00:22:44.750 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:22:44.750 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:22:44.750 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 109], 00:22:44.750 | 99.00th=[ 133], 99.50th=[ 134], 99.90th=[ 144], 99.95th=[ 144], 00:22:44.750 | 99.99th=[ 144] 00:22:44.750 bw ( KiB/s): min= 752, max= 1120, per=4.12%, avg=904.95, stdev=111.81, samples=20 00:22:44.750 iops : min= 188, max= 280, avg=226.20, stdev=27.93, samples=20 00:22:44.750 lat (msec) : 50=15.57%, 100=77.74%, 250=6.69% 00:22:44.750 cpu : usr=32.58%, sys=0.60%, ctx=882, majf=0, minf=9 00:22:44.750 IO depths : 1=0.8%, 2=2.5%, 4=11.3%, 8=72.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:22:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 complete : 0=0.0%, 4=90.3%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 issued rwts: total=2273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.750 filename1: (groupid=0, jobs=1): err= 0: pid=98549: Sun Jul 21 16:38:00 2024 00:22:44.750 read: IOPS=249, BW=999KiB/s (1023kB/s)(9.82MiB/10063msec) 00:22:44.750 slat (usec): min=6, max=8060, avg=23.26, stdev=277.39 00:22:44.750 clat (msec): min=2, max=155, avg=63.78, stdev=21.66 00:22:44.750 lat (msec): min=2, max=155, avg=63.80, stdev=21.67 00:22:44.750 clat percentiles (msec): 00:22:44.750 | 1.00th=[ 4], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 48], 00:22:44.750 | 30.00th=[ 53], 40.00th=[ 59], 
50.00th=[ 62], 60.00th=[ 67], 00:22:44.750 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 104], 00:22:44.750 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:22:44.750 | 99.99th=[ 157] 00:22:44.750 bw ( KiB/s): min= 714, max= 1664, per=4.55%, avg=998.90, stdev=192.32, samples=20 00:22:44.750 iops : min= 178, max= 416, avg=249.70, stdev=48.12, samples=20 00:22:44.750 lat (msec) : 4=1.19%, 10=1.35%, 20=0.64%, 50=22.28%, 100=69.41% 00:22:44.750 lat (msec) : 250=5.13% 00:22:44.750 cpu : usr=35.07%, sys=0.72%, ctx=1119, majf=0, minf=9 00:22:44.750 IO depths : 1=1.1%, 2=2.5%, 4=9.7%, 8=74.4%, 16=12.3%, 32=0.0%, >=64=0.0% 00:22:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 issued rwts: total=2514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.750 filename1: (groupid=0, jobs=1): err= 0: pid=98550: Sun Jul 21 16:38:00 2024 00:22:44.750 read: IOPS=226, BW=904KiB/s (926kB/s)(9084KiB/10046msec) 00:22:44.750 slat (usec): min=5, max=7028, avg=19.32, stdev=184.07 00:22:44.750 clat (msec): min=31, max=177, avg=70.59, stdev=21.37 00:22:44.750 lat (msec): min=31, max=177, avg=70.61, stdev=21.37 00:22:44.750 clat percentiles (msec): 00:22:44.750 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 52], 00:22:44.750 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 72], 00:22:44.750 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 103], 95.00th=[ 109], 00:22:44.750 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 163], 00:22:44.750 | 99.99th=[ 178] 00:22:44.750 bw ( KiB/s): min= 688, max= 1120, per=4.11%, avg=901.55, stdev=121.73, samples=20 00:22:44.750 iops : min= 172, max= 280, avg=225.35, stdev=30.38, samples=20 00:22:44.750 lat (msec) : 50=18.01%, 100=71.73%, 250=10.26% 00:22:44.750 cpu : usr=41.49%, sys=0.68%, ctx=1239, majf=0, minf=9 00:22:44.750 IO depths : 1=2.3%, 2=5.0%, 4=14.0%, 8=67.7%, 16=11.0%, 32=0.0%, >=64=0.0% 00:22:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.750 filename1: (groupid=0, jobs=1): err= 0: pid=98551: Sun Jul 21 16:38:00 2024 00:22:44.750 read: IOPS=269, BW=1077KiB/s (1102kB/s)(10.6MiB/10061msec) 00:22:44.750 slat (usec): min=4, max=6568, avg=17.45, stdev=166.62 00:22:44.750 clat (msec): min=2, max=127, avg=59.12, stdev=20.64 00:22:44.750 lat (msec): min=3, max=127, avg=59.13, stdev=20.64 00:22:44.750 clat percentiles (msec): 00:22:44.750 | 1.00th=[ 4], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 44], 00:22:44.750 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 65], 00:22:44.750 | 70.00th=[ 69], 80.00th=[ 74], 90.00th=[ 87], 95.00th=[ 96], 00:22:44.750 | 99.00th=[ 111], 99.50th=[ 124], 99.90th=[ 125], 99.95th=[ 128], 00:22:44.750 | 99.99th=[ 128] 00:22:44.750 bw ( KiB/s): min= 768, max= 1920, per=4.93%, avg=1080.30, stdev=260.27, samples=20 00:22:44.750 iops : min= 192, max= 480, avg=270.05, stdev=65.06, samples=20 00:22:44.750 lat (msec) : 4=1.18%, 10=1.18%, 20=0.59%, 50=36.08%, 100=57.42% 00:22:44.750 lat (msec) : 250=3.55% 00:22:44.750 cpu : usr=44.29%, sys=0.99%, ctx=1192, majf=0, minf=9 00:22:44.750 IO depths : 1=1.2%, 2=2.7%, 4=9.2%, 8=74.8%, 16=12.2%, 32=0.0%, 
>=64=0.0% 00:22:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 issued rwts: total=2708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.750 filename1: (groupid=0, jobs=1): err= 0: pid=98552: Sun Jul 21 16:38:00 2024 00:22:44.750 read: IOPS=218, BW=874KiB/s (895kB/s)(8768KiB/10028msec) 00:22:44.750 slat (usec): min=5, max=8021, avg=21.13, stdev=241.98 00:22:44.750 clat (msec): min=31, max=158, avg=73.01, stdev=21.14 00:22:44.750 lat (msec): min=31, max=158, avg=73.03, stdev=21.14 00:22:44.750 clat percentiles (msec): 00:22:44.750 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:22:44.750 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 73], 00:22:44.750 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 108], 00:22:44.750 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:22:44.750 | 99.99th=[ 159] 00:22:44.750 bw ( KiB/s): min= 640, max= 1072, per=3.97%, avg=870.35, stdev=123.57, samples=20 00:22:44.750 iops : min= 160, max= 268, avg=217.55, stdev=30.92, samples=20 00:22:44.750 lat (msec) : 50=13.69%, 100=75.18%, 250=11.13% 00:22:44.750 cpu : usr=33.91%, sys=0.56%, ctx=946, majf=0, minf=9 00:22:44.750 IO depths : 1=1.5%, 2=3.1%, 4=10.6%, 8=72.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:22:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.750 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.750 filename1: (groupid=0, jobs=1): err= 0: pid=98553: Sun Jul 21 16:38:00 2024 00:22:44.750 read: IOPS=241, BW=965KiB/s (988kB/s)(9692KiB/10047msec) 00:22:44.750 slat (usec): min=4, max=4031, avg=13.57, stdev=81.96 00:22:44.750 clat (msec): min=28, max=161, avg=66.12, stdev=19.86 00:22:44.750 lat (msec): min=28, max=161, avg=66.13, stdev=19.86 00:22:44.750 clat percentiles (msec): 00:22:44.750 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 49], 00:22:44.750 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 66], 60.00th=[ 68], 00:22:44.750 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 92], 95.00th=[ 105], 00:22:44.750 | 99.00th=[ 124], 99.50th=[ 140], 99.90th=[ 161], 99.95th=[ 161], 00:22:44.750 | 99.99th=[ 161] 00:22:44.750 bw ( KiB/s): min= 744, max= 1248, per=4.40%, avg=965.60, stdev=124.10, samples=20 00:22:44.750 iops : min= 186, max= 312, avg=241.40, stdev=31.03, samples=20 00:22:44.750 lat (msec) : 50=22.70%, 100=71.23%, 250=6.07% 00:22:44.750 cpu : usr=43.70%, sys=0.71%, ctx=1475, majf=0, minf=9 00:22:44.750 IO depths : 1=1.7%, 2=3.7%, 4=11.1%, 8=72.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:22:44.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.750 issued rwts: total=2423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.751 filename1: (groupid=0, jobs=1): err= 0: pid=98554: Sun Jul 21 16:38:00 2024 00:22:44.751 read: IOPS=204, BW=818KiB/s (838kB/s)(8204KiB/10027msec) 00:22:44.751 slat (usec): min=4, max=8046, avg=29.70, stdev=353.95 00:22:44.751 clat (msec): min=31, max=140, avg=77.93, stdev=20.59 00:22:44.751 lat (msec): min=31, max=140, avg=77.96, stdev=20.58 00:22:44.751 
clat percentiles (msec): 00:22:44.751 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 61], 00:22:44.751 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 82], 00:22:44.751 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 116], 00:22:44.751 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:22:44.751 | 99.99th=[ 140] 00:22:44.751 bw ( KiB/s): min= 640, max= 1088, per=3.73%, avg=817.65, stdev=104.66, samples=20 00:22:44.751 iops : min= 160, max= 272, avg=204.35, stdev=26.18, samples=20 00:22:44.751 lat (msec) : 50=8.68%, 100=74.74%, 250=16.58% 00:22:44.751 cpu : usr=32.63%, sys=0.52%, ctx=869, majf=0, minf=9 00:22:44.751 IO depths : 1=1.7%, 2=3.8%, 4=13.3%, 8=69.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:22:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 issued rwts: total=2051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.751 filename1: (groupid=0, jobs=1): err= 0: pid=98555: Sun Jul 21 16:38:00 2024 00:22:44.751 read: IOPS=231, BW=927KiB/s (949kB/s)(9316KiB/10051msec) 00:22:44.751 slat (usec): min=4, max=4025, avg=15.45, stdev=117.28 00:22:44.751 clat (msec): min=32, max=146, avg=68.82, stdev=21.58 00:22:44.751 lat (msec): min=32, max=146, avg=68.84, stdev=21.58 00:22:44.751 clat percentiles (msec): 00:22:44.751 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:22:44.751 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 72], 00:22:44.751 | 70.00th=[ 77], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 111], 00:22:44.751 | 99.00th=[ 133], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:22:44.751 | 99.99th=[ 146] 00:22:44.751 bw ( KiB/s): min= 728, max= 1328, per=4.22%, avg=924.85, stdev=145.92, samples=20 00:22:44.751 iops : min= 182, max= 332, avg=231.20, stdev=36.50, samples=20 00:22:44.751 lat (msec) : 50=24.43%, 100=65.74%, 250=9.83% 00:22:44.751 cpu : usr=42.35%, sys=0.78%, ctx=1248, majf=0, minf=9 00:22:44.751 IO depths : 1=1.8%, 2=4.0%, 4=12.4%, 8=70.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:22:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 complete : 0=0.0%, 4=90.6%, 8=4.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 issued rwts: total=2329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.751 filename2: (groupid=0, jobs=1): err= 0: pid=98556: Sun Jul 21 16:38:00 2024 00:22:44.751 read: IOPS=221, BW=885KiB/s (906kB/s)(8888KiB/10047msec) 00:22:44.751 slat (usec): min=6, max=4003, avg=14.75, stdev=85.03 00:22:44.751 clat (msec): min=24, max=151, avg=72.24, stdev=18.67 00:22:44.751 lat (msec): min=24, max=151, avg=72.25, stdev=18.67 00:22:44.751 clat percentiles (msec): 00:22:44.751 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:22:44.751 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:22:44.751 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 99], 95.00th=[ 106], 00:22:44.751 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 153], 99.95th=[ 153], 00:22:44.751 | 99.99th=[ 153] 00:22:44.751 bw ( KiB/s): min= 656, max= 1232, per=4.02%, avg=882.20, stdev=132.50, samples=20 00:22:44.751 iops : min= 164, max= 308, avg=220.55, stdev=33.12, samples=20 00:22:44.751 lat (msec) : 50=11.79%, 100=79.75%, 250=8.46% 00:22:44.751 cpu : usr=43.72%, sys=0.85%, ctx=1248, majf=0, minf=9 00:22:44.751 IO depths : 1=1.8%, 2=4.1%, 
4=12.1%, 8=70.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:22:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 complete : 0=0.0%, 4=90.8%, 8=4.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 issued rwts: total=2222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.751 filename2: (groupid=0, jobs=1): err= 0: pid=98557: Sun Jul 21 16:38:00 2024 00:22:44.751 read: IOPS=249, BW=998KiB/s (1021kB/s)(9.79MiB/10053msec) 00:22:44.751 slat (usec): min=4, max=6025, avg=14.32, stdev=120.36 00:22:44.751 clat (msec): min=13, max=139, avg=63.95, stdev=21.52 00:22:44.751 lat (msec): min=13, max=139, avg=63.96, stdev=21.53 00:22:44.751 clat percentiles (msec): 00:22:44.751 | 1.00th=[ 30], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 45], 00:22:44.751 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 62], 60.00th=[ 68], 00:22:44.751 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 95], 95.00th=[ 105], 00:22:44.751 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:22:44.751 | 99.99th=[ 140] 00:22:44.751 bw ( KiB/s): min= 720, max= 1376, per=4.54%, avg=996.50, stdev=176.49, samples=20 00:22:44.751 iops : min= 180, max= 344, avg=249.10, stdev=44.11, samples=20 00:22:44.751 lat (msec) : 20=0.64%, 50=29.76%, 100=62.86%, 250=6.74% 00:22:44.751 cpu : usr=43.50%, sys=0.73%, ctx=1137, majf=0, minf=9 00:22:44.751 IO depths : 1=1.0%, 2=2.7%, 4=10.5%, 8=73.5%, 16=12.2%, 32=0.0%, >=64=0.0% 00:22:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.751 filename2: (groupid=0, jobs=1): err= 0: pid=98558: Sun Jul 21 16:38:00 2024 00:22:44.751 read: IOPS=214, BW=858KiB/s (879kB/s)(8604KiB/10029msec) 00:22:44.751 slat (usec): min=6, max=8031, avg=20.73, stdev=244.44 00:22:44.751 clat (msec): min=30, max=194, avg=74.43, stdev=23.41 00:22:44.751 lat (msec): min=30, max=194, avg=74.45, stdev=23.42 00:22:44.751 clat percentiles (msec): 00:22:44.751 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 58], 00:22:44.751 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 72], 00:22:44.751 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 121], 00:22:44.751 | 99.00th=[ 148], 99.50th=[ 159], 99.90th=[ 194], 99.95th=[ 194], 00:22:44.751 | 99.99th=[ 194] 00:22:44.751 bw ( KiB/s): min= 512, max= 1120, per=3.89%, avg=853.95, stdev=128.47, samples=20 00:22:44.751 iops : min= 128, max= 280, avg=213.45, stdev=32.13, samples=20 00:22:44.751 lat (msec) : 50=14.78%, 100=72.66%, 250=12.55% 00:22:44.751 cpu : usr=32.48%, sys=0.63%, ctx=929, majf=0, minf=9 00:22:44.751 IO depths : 1=1.5%, 2=3.2%, 4=10.6%, 8=72.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:22:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 complete : 0=0.0%, 4=90.4%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.751 filename2: (groupid=0, jobs=1): err= 0: pid=98559: Sun Jul 21 16:38:00 2024 00:22:44.751 read: IOPS=254, BW=1020KiB/s (1044kB/s)(10.0MiB/10041msec) 00:22:44.751 slat (usec): min=5, max=4018, avg=17.16, stdev=149.44 00:22:44.751 clat (msec): min=27, max=151, avg=62.53, stdev=20.11 00:22:44.751 lat (msec): 
min=27, max=151, avg=62.54, stdev=20.12 00:22:44.751 clat percentiles (msec): 00:22:44.751 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 46], 00:22:44.751 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 66], 00:22:44.751 | 70.00th=[ 70], 80.00th=[ 78], 90.00th=[ 87], 95.00th=[ 103], 00:22:44.751 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 153], 99.95th=[ 153], 00:22:44.751 | 99.99th=[ 153] 00:22:44.751 bw ( KiB/s): min= 768, max= 1248, per=4.64%, avg=1017.60, stdev=156.11, samples=20 00:22:44.751 iops : min= 192, max= 312, avg=254.40, stdev=39.03, samples=20 00:22:44.751 lat (msec) : 50=33.71%, 100=60.59%, 250=5.70% 00:22:44.751 cpu : usr=42.20%, sys=0.61%, ctx=1299, majf=0, minf=9 00:22:44.751 IO depths : 1=0.7%, 2=1.6%, 4=7.7%, 8=76.8%, 16=13.3%, 32=0.0%, >=64=0.0% 00:22:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 complete : 0=0.0%, 4=89.6%, 8=6.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 issued rwts: total=2560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.751 filename2: (groupid=0, jobs=1): err= 0: pid=98560: Sun Jul 21 16:38:00 2024 00:22:44.751 read: IOPS=200, BW=803KiB/s (822kB/s)(8048KiB/10027msec) 00:22:44.751 slat (usec): min=4, max=8023, avg=19.29, stdev=200.07 00:22:44.751 clat (msec): min=35, max=159, avg=79.57, stdev=22.41 00:22:44.751 lat (msec): min=35, max=159, avg=79.59, stdev=22.41 00:22:44.751 clat percentiles (msec): 00:22:44.751 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 62], 00:22:44.751 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:22:44.751 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 116], 95.00th=[ 123], 00:22:44.751 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 161], 00:22:44.751 | 99.99th=[ 161] 00:22:44.751 bw ( KiB/s): min= 600, max= 1024, per=3.64%, avg=798.00, stdev=122.57, samples=20 00:22:44.751 iops : min= 150, max= 256, avg=199.45, stdev=30.61, samples=20 00:22:44.751 lat (msec) : 50=7.11%, 100=75.60%, 250=17.30% 00:22:44.751 cpu : usr=33.80%, sys=0.73%, ctx=915, majf=0, minf=9 00:22:44.751 IO depths : 1=3.2%, 2=7.3%, 4=18.8%, 8=61.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:22:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.751 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.751 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.751 filename2: (groupid=0, jobs=1): err= 0: pid=98561: Sun Jul 21 16:38:00 2024 00:22:44.751 read: IOPS=251, BW=1007KiB/s (1032kB/s)(9.86MiB/10026msec) 00:22:44.751 slat (usec): min=6, max=5380, avg=17.67, stdev=154.99 00:22:44.751 clat (msec): min=23, max=172, avg=63.35, stdev=20.85 00:22:44.751 lat (msec): min=23, max=172, avg=63.37, stdev=20.85 00:22:44.751 clat percentiles (msec): 00:22:44.751 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 42], 20.00th=[ 46], 00:22:44.751 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 66], 00:22:44.751 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 93], 95.00th=[ 104], 00:22:44.751 | 99.00th=[ 134], 99.50th=[ 134], 99.90th=[ 174], 99.95th=[ 174], 00:22:44.751 | 99.99th=[ 174] 00:22:44.751 bw ( KiB/s): min= 656, max= 1296, per=4.59%, avg=1005.70, stdev=157.75, samples=20 00:22:44.751 iops : min= 164, max= 324, avg=251.40, stdev=39.44, samples=20 00:22:44.751 lat (msec) : 50=30.22%, 100=62.53%, 250=7.25% 00:22:44.751 cpu : usr=45.69%, sys=0.79%, ctx=1668, 
majf=0, minf=9 00:22:44.751 IO depths : 1=1.6%, 2=3.6%, 4=11.6%, 8=71.6%, 16=11.4%, 32=0.0%, >=64=0.0% 00:22:44.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.752 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.752 issued rwts: total=2525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.752 filename2: (groupid=0, jobs=1): err= 0: pid=98562: Sun Jul 21 16:38:00 2024 00:22:44.752 read: IOPS=260, BW=1044KiB/s (1069kB/s)(10.2MiB/10049msec) 00:22:44.752 slat (usec): min=4, max=8032, avg=16.61, stdev=175.28 00:22:44.752 clat (msec): min=20, max=140, avg=61.13, stdev=18.61 00:22:44.752 lat (msec): min=20, max=140, avg=61.14, stdev=18.62 00:22:44.752 clat percentiles (msec): 00:22:44.752 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 46], 00:22:44.752 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 65], 00:22:44.752 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 86], 95.00th=[ 101], 00:22:44.752 | 99.00th=[ 116], 99.50th=[ 133], 99.90th=[ 140], 99.95th=[ 140], 00:22:44.752 | 99.99th=[ 142] 00:22:44.752 bw ( KiB/s): min= 832, max= 1296, per=4.76%, avg=1044.80, stdev=128.77, samples=20 00:22:44.752 iops : min= 208, max= 324, avg=261.20, stdev=32.19, samples=20 00:22:44.752 lat (msec) : 50=32.95%, 100=61.98%, 250=5.07% 00:22:44.752 cpu : usr=42.80%, sys=0.86%, ctx=1385, majf=0, minf=9 00:22:44.752 IO depths : 1=1.3%, 2=2.9%, 4=11.1%, 8=72.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:22:44.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.752 complete : 0=0.0%, 4=90.1%, 8=4.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.752 issued rwts: total=2622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.752 filename2: (groupid=0, jobs=1): err= 0: pid=98563: Sun Jul 21 16:38:00 2024 00:22:44.752 read: IOPS=211, BW=846KiB/s (866kB/s)(8480KiB/10026msec) 00:22:44.752 slat (usec): min=4, max=8029, avg=19.46, stdev=194.90 00:22:44.752 clat (msec): min=34, max=171, avg=75.51, stdev=21.52 00:22:44.752 lat (msec): min=34, max=171, avg=75.53, stdev=21.52 00:22:44.752 clat percentiles (msec): 00:22:44.752 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 59], 00:22:44.752 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 75], 00:22:44.752 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 108], 95.00th=[ 120], 00:22:44.752 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 171], 99.95th=[ 171], 00:22:44.752 | 99.99th=[ 171] 00:22:44.752 bw ( KiB/s): min= 640, max= 1072, per=3.84%, avg=841.25, stdev=122.23, samples=20 00:22:44.752 iops : min= 160, max= 268, avg=210.30, stdev=30.53, samples=20 00:22:44.752 lat (msec) : 50=8.68%, 100=78.16%, 250=13.16% 00:22:44.752 cpu : usr=41.79%, sys=0.79%, ctx=1413, majf=0, minf=9 00:22:44.752 IO depths : 1=1.9%, 2=4.3%, 4=12.6%, 8=69.2%, 16=12.0%, 32=0.0%, >=64=0.0% 00:22:44.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.752 complete : 0=0.0%, 4=91.1%, 8=4.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.752 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.752 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:44.752 00:22:44.752 Run status group 0 (all jobs): 00:22:44.752 READ: bw=21.4MiB/s (22.4MB/s), 789KiB/s-1091KiB/s (808kB/s-1118kB/s), io=215MiB (226MB), run=10001-10063msec 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # 
destroy_subsystems 0 1 2 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # NULL_DIF=1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 bdev_null0 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 [2024-07-21 16:38:01.304136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
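Condensed, the subsystem setup traced here for the 8k/16k/128k random-params run is the RPC sequence sketched below. This is a sketch only: rpc_cmd in these scripts is assumed to resolve to scripts/rpc.py against the target started earlier (default RPC socket), and the cnode1 half of the sequence continues in the trace that follows.

    # per-subsystem setup as traced by dif.sh create_subsystem 0 (then repeated for 1)
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # ...same steps with bdev_null1 / cnode1 / serial 53313233-1 below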
00:22:44.752 bdev_null1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.752 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.752 { 00:22:44.752 "params": { 00:22:44.752 "name": "Nvme$subsystem", 00:22:44.752 "trtype": "$TEST_TRANSPORT", 00:22:44.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.753 "adrfam": "ipv4", 00:22:44.753 "trsvcid": "$NVMF_PORT", 00:22:44.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.753 "hdgst": ${hdgst:-false}, 00:22:44.753 "ddgst": ${ddgst:-false} 00:22:44.753 }, 00:22:44.753 "method": "bdev_nvme_attach_controller" 00:22:44.753 } 00:22:44.753 EOF 00:22:44.753 )") 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:44.753 { 00:22:44.753 "params": { 00:22:44.753 "name": "Nvme$subsystem", 00:22:44.753 "trtype": "$TEST_TRANSPORT", 00:22:44.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.753 "adrfam": "ipv4", 00:22:44.753 "trsvcid": "$NVMF_PORT", 00:22:44.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.753 "hdgst": ${hdgst:-false}, 00:22:44.753 "ddgst": ${ddgst:-false} 00:22:44.753 }, 00:22:44.753 "method": "bdev_nvme_attach_controller" 00:22:44.753 } 00:22:44.753 EOF 00:22:44.753 )") 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
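The job file that gen_fio_conf is assembling here is not echoed to the log, but from the parameters set above (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) and the fio banner printed below, it amounts to roughly the sketch that follows. The filename= values are an assumption (the namespace bdevs exposed by the Nvme0/Nvme1 controllers attached via the JSON config), not taken from the log.

    ; sketch of the generated fio job (spdk_bdev ioengine, one section per file)
    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k      ; read/write/trim sizes -> (R) 8192B, (W) 16.0KiB, (T) 128KiB in the banner
    iodepth=8
    runtime=5
    numjobs=2           ; 2 jobs x 2 file sections = the 4 threads fio starts below

    [filename0]
    filename=Nvme0n1    ; assumed bdev name

    [filename1]
    filename=Nvme1n1    ; assumed bdev name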
00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:44.753 "params": { 00:22:44.753 "name": "Nvme0", 00:22:44.753 "trtype": "tcp", 00:22:44.753 "traddr": "10.0.0.2", 00:22:44.753 "adrfam": "ipv4", 00:22:44.753 "trsvcid": "4420", 00:22:44.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:44.753 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:44.753 "hdgst": false, 00:22:44.753 "ddgst": false 00:22:44.753 }, 00:22:44.753 "method": "bdev_nvme_attach_controller" 00:22:44.753 },{ 00:22:44.753 "params": { 00:22:44.753 "name": "Nvme1", 00:22:44.753 "trtype": "tcp", 00:22:44.753 "traddr": "10.0.0.2", 00:22:44.753 "adrfam": "ipv4", 00:22:44.753 "trsvcid": "4420", 00:22:44.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.753 "hdgst": false, 00:22:44.753 "ddgst": false 00:22:44.753 }, 00:22:44.753 "method": "bdev_nvme_attach_controller" 00:22:44.753 }' 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:44.753 16:38:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:44.753 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:44.753 ... 00:22:44.753 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:44.753 ... 
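For reference, the first parameter block in the JSON printed above corresponds to a plain RPC attach along the lines of the command below. This is hypothetical and not something this test issues; the flag names are the usual scripts/rpc.py ones. In the test itself the fio spdk_bdev plugin performs the equivalent attach in-process from the config it receives on /dev/fd/62.

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0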
00:22:44.753 fio-3.35 00:22:44.753 Starting 4 threads 00:22:50.017 00:22:50.017 filename0: (groupid=0, jobs=1): err= 0: pid=98695: Sun Jul 21 16:38:07 2024 00:22:50.017 read: IOPS=2077, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5002msec) 00:22:50.017 slat (nsec): min=3969, max=74826, avg=14284.63, stdev=6583.22 00:22:50.017 clat (usec): min=2009, max=6721, avg=3778.35, stdev=264.38 00:22:50.017 lat (usec): min=2020, max=6730, avg=3792.64, stdev=264.16 00:22:50.017 clat percentiles (usec): 00:22:50.017 | 1.00th=[ 3261], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3589], 00:22:50.017 | 30.00th=[ 3654], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:22:50.017 | 70.00th=[ 3851], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4228], 00:22:50.017 | 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 6325], 99.95th=[ 6390], 00:22:50.017 | 99.99th=[ 6718] 00:22:50.017 bw ( KiB/s): min=15744, max=17408, per=25.04%, avg=16672.11, stdev=480.07, samples=9 00:22:50.017 iops : min= 1968, max= 2176, avg=2084.00, stdev=60.00, samples=9 00:22:50.017 lat (msec) : 4=84.93%, 10=15.07% 00:22:50.017 cpu : usr=94.74%, sys=3.88%, ctx=8, majf=0, minf=9 00:22:50.017 IO depths : 1=11.6%, 2=25.0%, 4=50.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.017 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.017 issued rwts: total=10392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.017 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:50.017 filename0: (groupid=0, jobs=1): err= 0: pid=98696: Sun Jul 21 16:38:07 2024 00:22:50.017 read: IOPS=2087, BW=16.3MiB/s (17.1MB/s)(81.6MiB/5002msec) 00:22:50.017 slat (nsec): min=5952, max=72945, avg=9584.68, stdev=6244.67 00:22:50.017 clat (usec): min=941, max=7153, avg=3781.70, stdev=317.95 00:22:50.017 lat (usec): min=958, max=7159, avg=3791.29, stdev=317.87 00:22:50.017 clat percentiles (usec): 00:22:50.017 | 1.00th=[ 3163], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3621], 00:22:50.017 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:22:50.017 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4228], 00:22:50.017 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 6194], 99.95th=[ 6390], 00:22:50.017 | 99.99th=[ 6587] 00:22:50.017 bw ( KiB/s): min=15872, max=17536, per=25.18%, avg=16766.22, stdev=551.04, samples=9 00:22:50.017 iops : min= 1984, max= 2192, avg=2095.78, stdev=68.88, samples=9 00:22:50.017 lat (usec) : 1000=0.06% 00:22:50.017 lat (msec) : 2=0.53%, 4=83.37%, 10=16.04% 00:22:50.017 cpu : usr=94.94%, sys=3.84%, ctx=5, majf=0, minf=0 00:22:50.017 IO depths : 1=10.6%, 2=24.0%, 4=50.9%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.017 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.017 issued rwts: total=10440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.017 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:50.017 filename1: (groupid=0, jobs=1): err= 0: pid=98697: Sun Jul 21 16:38:07 2024 00:22:50.017 read: IOPS=2080, BW=16.3MiB/s (17.0MB/s)(81.3MiB/5001msec) 00:22:50.017 slat (nsec): min=3716, max=69145, avg=12839.10, stdev=6918.63 00:22:50.017 clat (usec): min=942, max=6002, avg=3791.61, stdev=273.21 00:22:50.017 lat (usec): min=949, max=6015, avg=3804.45, stdev=272.66 00:22:50.017 clat percentiles (usec): 00:22:50.017 | 1.00th=[ 3195], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3589], 
00:22:50.017 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:22:50.017 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4293], 00:22:50.017 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5211], 99.95th=[ 5473], 00:22:50.017 | 99.99th=[ 5997] 00:22:50.017 bw ( KiB/s): min=15792, max=17408, per=25.06%, avg=16688.00, stdev=475.98, samples=9 00:22:50.017 iops : min= 1974, max= 2176, avg=2086.00, stdev=59.50, samples=9 00:22:50.017 lat (usec) : 1000=0.03% 00:22:50.017 lat (msec) : 4=82.69%, 10=17.28% 00:22:50.017 cpu : usr=94.98%, sys=3.72%, ctx=30, majf=0, minf=0 00:22:50.017 IO depths : 1=7.0%, 2=14.7%, 4=60.2%, 8=18.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.017 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.017 issued rwts: total=10403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.017 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:50.017 filename1: (groupid=0, jobs=1): err= 0: pid=98698: Sun Jul 21 16:38:07 2024 00:22:50.017 read: IOPS=2079, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5001msec) 00:22:50.017 slat (nsec): min=5002, max=65488, avg=12640.99, stdev=5485.72 00:22:50.017 clat (usec): min=1181, max=6652, avg=3784.41, stdev=288.66 00:22:50.017 lat (usec): min=1187, max=6658, avg=3797.05, stdev=288.24 00:22:50.017 clat percentiles (usec): 00:22:50.017 | 1.00th=[ 3195], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3589], 00:22:50.017 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3818], 00:22:50.017 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4228], 00:22:50.017 | 99.00th=[ 4555], 99.50th=[ 4817], 99.90th=[ 5866], 99.95th=[ 6063], 00:22:50.017 | 99.99th=[ 6521] 00:22:50.017 bw ( KiB/s): min=15744, max=17408, per=25.04%, avg=16672.11, stdev=480.07, samples=9 00:22:50.017 iops : min= 1968, max= 2176, avg=2084.00, stdev=60.00, samples=9 00:22:50.017 lat (msec) : 2=0.12%, 4=84.31%, 10=15.58% 00:22:50.017 cpu : usr=94.68%, sys=4.02%, ctx=3, majf=0, minf=9 00:22:50.017 IO depths : 1=10.3%, 2=25.0%, 4=50.0%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:50.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.017 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.017 issued rwts: total=10400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.017 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:50.017 00:22:50.017 Run status group 0 (all jobs): 00:22:50.017 READ: bw=65.0MiB/s (68.2MB/s), 16.2MiB/s-16.3MiB/s (17.0MB/s-17.1MB/s), io=325MiB (341MB), run=5001-5002msec 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.017 00:22:50.017 real 0m23.733s 00:22:50.017 user 2m7.434s 00:22:50.017 sys 0m4.165s 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:50.017 ************************************ 00:22:50.017 END TEST fio_dif_rand_params 00:22:50.017 ************************************ 00:22:50.017 16:38:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:50.017 16:38:07 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:50.017 16:38:07 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:50.017 16:38:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:50.017 16:38:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:50.017 16:38:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:50.017 ************************************ 00:22:50.017 START TEST fio_dif_digest 00:22:50.017 ************************************ 00:22:50.017 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:22:50.017 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:50.017 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:50.017 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:50.017 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:50.017 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:50.017 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:50.017 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:50.017 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:50.018 16:38:07 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:50.018 bdev_null0 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:50.018 [2024-07-21 16:38:07.537491] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:22:50.018 
16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.018 { 00:22:50.018 "params": { 00:22:50.018 "name": "Nvme$subsystem", 00:22:50.018 "trtype": "$TEST_TRANSPORT", 00:22:50.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.018 "adrfam": "ipv4", 00:22:50.018 "trsvcid": "$NVMF_PORT", 00:22:50.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.018 "hdgst": ${hdgst:-false}, 00:22:50.018 "ddgst": ${ddgst:-false} 00:22:50.018 }, 00:22:50.018 "method": "bdev_nvme_attach_controller" 00:22:50.018 } 00:22:50.018 EOF 00:22:50.018 )") 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
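Again the generated job file itself is not logged; from the parameters set for this digest test above (bs=128k, numjobs=3, iodepth=3, runtime=10, a single file, hdgst/ddgst enabled on the attach) it comes down to something like the sketch below, with the filename= value assumed rather than taken from the log.

    ; sketch of the fio_dif_digest job (matches the "Starting 3 threads" banner below)
    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    runtime=10
    numjobs=3

    [filename0]
    filename=Nvme0n1    ; assumed bdev name for the Nvme0 controller attached with hdgst/ddgst=true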
00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:50.018 "params": { 00:22:50.018 "name": "Nvme0", 00:22:50.018 "trtype": "tcp", 00:22:50.018 "traddr": "10.0.0.2", 00:22:50.018 "adrfam": "ipv4", 00:22:50.018 "trsvcid": "4420", 00:22:50.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:50.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:50.018 "hdgst": true, 00:22:50.018 "ddgst": true 00:22:50.018 }, 00:22:50.018 "method": "bdev_nvme_attach_controller" 00:22:50.018 }' 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:50.018 16:38:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.018 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:50.018 ... 
00:22:50.018 fio-3.35 00:22:50.018 Starting 3 threads 00:23:02.233 00:23:02.233 filename0: (groupid=0, jobs=1): err= 0: pid=98801: Sun Jul 21 16:38:18 2024 00:23:02.233 read: IOPS=210, BW=26.4MiB/s (27.6MB/s)(264MiB/10006msec) 00:23:02.233 slat (nsec): min=6517, max=61665, avg=14248.73, stdev=6486.76 00:23:02.233 clat (usec): min=5983, max=54211, avg=14209.84, stdev=10752.95 00:23:02.233 lat (usec): min=5990, max=54230, avg=14224.09, stdev=10753.07 00:23:02.233 clat percentiles (usec): 00:23:02.233 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:23:02.233 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:23:02.233 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12387], 95.00th=[51643], 00:23:02.233 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:23:02.233 | 99.99th=[54264] 00:23:02.233 bw ( KiB/s): min=20480, max=31232, per=31.59%, avg=27082.11, stdev=3552.96, samples=19 00:23:02.233 iops : min= 160, max= 244, avg=211.58, stdev=27.76, samples=19 00:23:02.233 lat (msec) : 10=4.74%, 20=87.73%, 50=0.09%, 100=7.44% 00:23:02.233 cpu : usr=95.02%, sys=3.80%, ctx=46, majf=0, minf=0 00:23:02.233 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:02.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.233 issued rwts: total=2110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.233 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:02.233 filename0: (groupid=0, jobs=1): err= 0: pid=98802: Sun Jul 21 16:38:18 2024 00:23:02.233 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(265MiB/10005msec) 00:23:02.233 slat (nsec): min=6700, max=75390, avg=16297.64, stdev=7017.58 00:23:02.233 clat (usec): min=8794, max=20082, avg=14156.89, stdev=2698.82 00:23:02.233 lat (usec): min=8815, max=20110, avg=14173.19, stdev=2698.35 00:23:02.233 clat percentiles (usec): 00:23:02.233 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10421], 00:23:02.233 | 30.00th=[12780], 40.00th=[15008], 50.00th=[15401], 60.00th=[15664], 00:23:02.233 | 70.00th=[16057], 80.00th=[16319], 90.00th=[16712], 95.00th=[17171], 00:23:02.233 | 99.00th=[17695], 99.50th=[17695], 99.90th=[20055], 99.95th=[20055], 00:23:02.233 | 99.99th=[20055] 00:23:02.233 bw ( KiB/s): min=24576, max=30525, per=31.50%, avg=27004.47, stdev=1731.28, samples=19 00:23:02.233 iops : min= 192, max= 238, avg=210.95, stdev=13.47, samples=19 00:23:02.233 lat (msec) : 10=13.93%, 20=85.92%, 50=0.14% 00:23:02.233 cpu : usr=94.96%, sys=3.68%, ctx=141, majf=0, minf=9 00:23:02.233 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:02.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.233 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.233 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:02.233 filename0: (groupid=0, jobs=1): err= 0: pid=98803: Sun Jul 21 16:38:18 2024 00:23:02.233 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(309MiB/10004msec) 00:23:02.233 slat (nsec): min=6302, max=94658, avg=18386.84, stdev=8201.18 00:23:02.233 clat (usec): min=6595, max=21690, avg=12105.98, stdev=2549.52 00:23:02.233 lat (usec): min=6614, max=21724, avg=12124.37, stdev=2550.01 00:23:02.233 clat percentiles (usec): 00:23:02.233 | 1.00th=[ 7439], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 8848], 
00:23:02.233 | 30.00th=[10552], 40.00th=[12387], 50.00th=[13042], 60.00th=[13435], 00:23:02.233 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14877], 95.00th=[15270], 00:23:02.233 | 99.00th=[16188], 99.50th=[16319], 99.90th=[19530], 99.95th=[19530], 00:23:02.233 | 99.99th=[21627] 00:23:02.233 bw ( KiB/s): min=28416, max=35072, per=36.86%, avg=31599.26, stdev=1885.16, samples=19 00:23:02.233 iops : min= 222, max= 274, avg=246.84, stdev=14.70, samples=19 00:23:02.233 lat (msec) : 10=28.17%, 20=71.79%, 50=0.04% 00:23:02.234 cpu : usr=93.75%, sys=4.55%, ctx=84, majf=0, minf=9 00:23:02.234 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:02.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.234 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.234 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:02.234 00:23:02.234 Run status group 0 (all jobs): 00:23:02.234 READ: bw=83.7MiB/s (87.8MB/s), 26.4MiB/s-30.9MiB/s (27.6MB/s-32.4MB/s), io=838MiB (878MB), run=10004-10006msec 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.234 00:23:02.234 real 0m11.044s 00:23:02.234 user 0m29.063s 00:23:02.234 sys 0m1.497s 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:02.234 ************************************ 00:23:02.234 END TEST fio_dif_digest 00:23:02.234 ************************************ 00:23:02.234 16:38:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:02.234 16:38:18 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:02.234 16:38:18 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.234 rmmod nvme_tcp 00:23:02.234 rmmod nvme_fabrics 
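The cleanup traced here and continuing below reduces to the following sketch. Commands are taken from the trace; rpc_cmd is again assumed to resolve to scripts/rpc.py, and 98038 is the nvmf target pid reported by the killprocess trace that follows.

    # teardown as traced: destroy_subsystems 0, then nvmftestfini
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0
    sync
    modprobe -v -r nvme-tcp       # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
    modprobe -v -r nvme-fabrics
    kill 98038                    # killprocess of the nvmf target (reactor_0)
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset   # hand NVMe devices back to kernel drivers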
00:23:02.234 rmmod nvme_keyring 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 98038 ']' 00:23:02.234 16:38:18 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 98038 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 98038 ']' 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 98038 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98038 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:02.234 killing process with pid 98038 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98038' 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@967 -- # kill 98038 00:23:02.234 16:38:18 nvmf_dif -- common/autotest_common.sh@972 -- # wait 98038 00:23:02.234 16:38:19 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:02.234 16:38:19 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:02.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:02.234 Waiting for block devices as requested 00:23:02.234 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:02.234 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:02.234 16:38:19 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:02.234 16:38:19 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:02.234 16:38:19 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:02.234 16:38:19 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:02.234 16:38:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.234 16:38:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:02.234 16:38:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.234 16:38:19 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:02.234 ************************************ 00:23:02.234 END TEST nvmf_dif 00:23:02.234 ************************************ 00:23:02.234 00:23:02.234 real 1m0.336s 00:23:02.234 user 3m53.882s 00:23:02.234 sys 0m13.343s 00:23:02.234 16:38:19 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:02.234 16:38:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 16:38:19 -- common/autotest_common.sh@1142 -- # return 0 00:23:02.234 16:38:19 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:02.234 16:38:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:02.234 16:38:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.234 16:38:19 -- common/autotest_common.sh@10 -- # set +x 00:23:02.234 ************************************ 00:23:02.234 START TEST nvmf_abort_qd_sizes 00:23:02.234 ************************************ 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:02.234 * Looking for test storage... 00:23:02.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:02.234 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:02.235 16:38:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:02.235 Cannot find device "nvmf_tgt_br" 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:02.235 Cannot find device "nvmf_tgt_br2" 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:02.235 Cannot find device "nvmf_tgt_br" 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:02.235 Cannot find device "nvmf_tgt_br2" 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:02.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:02.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:02.235 16:38:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:02.235 16:38:19 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:02.235 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:02.236 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:02.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:23:02.236 00:23:02.236 --- 10.0.0.2 ping statistics --- 00:23:02.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.236 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:02.236 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:02.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:02.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:23:02.236 00:23:02.236 --- 10.0.0.3 ping statistics --- 00:23:02.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.236 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:23:02.236 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:02.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:02.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:23:02.236 00:23:02.236 --- 10.0.0.1 ping statistics --- 00:23:02.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.236 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:02.236 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.236 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:23:02.236 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:02.236 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:02.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:02.802 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:02.803 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:02.803 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.803 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:02.803 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:02.803 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.803 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:02.803 16:38:20 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:03.061 16:38:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:03.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99394 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99394 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 99394 ']' 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.062 16:38:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:03.062 [2024-07-21 16:38:21.090569] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:23:03.062 [2024-07-21 16:38:21.090678] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.062 [2024-07-21 16:38:21.235079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:03.320 [2024-07-21 16:38:21.352931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.320 [2024-07-21 16:38:21.353013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.320 [2024-07-21 16:38:21.353030] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.320 [2024-07-21 16:38:21.353043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.320 [2024-07-21 16:38:21.353054] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.320 [2024-07-21 16:38:21.353238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.320 [2024-07-21 16:38:21.353953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.320 [2024-07-21 16:38:21.354092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.320 [2024-07-21 16:38:21.354103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.885 16:38:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.885 16:38:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:23:03.885 16:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.885 16:38:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.885 16:38:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:04.144 16:38:22 
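The trace above is the nvme_in_userspace helper from scripts/common.sh choosing which controllers the abort tests may use: it filters lspci output for PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), keeps only functions that are still bound to the kernel nvme driver, and ends up with 0000:00:10.0 and 0000:00:11.0 here. A condensed standalone sketch of the same filter (a rephrasing of the traced grep/awk/tr pipeline, not the script itself):

    # List NVMe PCI functions (class/subclass 01/08, i.e. lspci class "0108";
    # the trace additionally greps for the -p02 programming-interface marker)
    # and keep those the kernel nvme driver currently owns.
    for bdf in $(lspci -mm -n -D | tr -d '"' | awk '$2 == "0108" {print $1}'); do
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
    done
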
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.144 16:38:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:04.144 ************************************ 00:23:04.144 START TEST spdk_target_abort 00:23:04.144 ************************************ 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.144 spdk_targetn1 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.144 [2024-07-21 16:38:22.246215] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.144 [2024-07-21 16:38:22.274521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.144 16:38:22 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:04.144 16:38:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:07.424 Initializing NVMe Controllers 00:23:07.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:07.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:07.424 Initialization complete. Launching workers. 
00:23:07.424 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9485, failed: 0 00:23:07.424 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 8267 00:23:07.424 success 757, unsuccess 461, failed 0 00:23:07.424 16:38:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:07.424 16:38:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:10.706 Initializing NVMe Controllers 00:23:10.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:10.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:10.706 Initialization complete. Launching workers. 00:23:10.706 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5980, failed: 0 00:23:10.706 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 4750 00:23:10.706 success 270, unsuccess 960, failed 0 00:23:10.706 16:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:10.707 16:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:13.986 Initializing NVMe Controllers 00:23:13.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:13.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:13.986 Initialization complete. Launching workers. 
00:23:13.986 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29803, failed: 0 00:23:13.986 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2736, failed to submit 27067 00:23:13.986 success 338, unsuccess 2398, failed 0 00:23:13.986 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:13.986 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.986 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:13.986 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.986 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:13.986 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.986 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99394 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 99394 ']' 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 99394 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99394 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:14.936 killing process with pid 99394 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99394' 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 99394 00:23:14.936 16:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 99394 00:23:14.936 00:23:14.936 real 0m10.958s 00:23:14.936 user 0m44.301s 00:23:14.936 sys 0m1.878s 00:23:14.936 16:38:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:14.936 16:38:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:14.936 ************************************ 00:23:14.936 END TEST spdk_target_abort 00:23:14.936 ************************************ 00:23:15.197 16:38:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:15.197 16:38:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:15.197 16:38:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:15.197 16:38:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:15.197 16:38:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:15.197 
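Before the kernel-target variant below, spdk_target_abort (just completed above) attached the 0000:00:10.0 controller as bdev spdk_targetn1, exported it over NVMe/TCP at 10.0.0.2:4420 under nqn.2016-06.io.spdk:testnqn, and ran the abort example at queue depths 4, 24 and 64; each run reports the I/O it issued (NS line) and the Abort commands it submitted against them, split into success/unsuccess (CTRLR line). Re-run by hand against the same target, the loop is simply the following (the conn variable name is illustrative; the command and arguments are the ones in the trace):

    # Assumes the SPDK nvmf target configured above is still listening on
    # 10.0.0.2:4420 and exposing nqn.2016-06.io.spdk:testnqn.
    conn='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$conn"
    done
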
************************************ 00:23:15.197 START TEST kernel_target_abort 00:23:15.197 ************************************ 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:15.197 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:15.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:15.455 Waiting for block devices as requested 00:23:15.455 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:15.712 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:15.712 No valid GPT data, bailing 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:15.712 No valid GPT data, bailing 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:15.712 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:15.713 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:15.713 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:15.713 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:15.713 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:15.713 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:15.713 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:15.713 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:15.713 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:15.713 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:15.970 No valid GPT data, bailing 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:15.970 16:38:33 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:15.970 No valid GPT data, bailing 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:15.970 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f --hostid=93b2ddbc-e521-435a-846f-e1bc9c67a86f -a 10.0.0.1 -t tcp -s 4420 00:23:15.970 00:23:15.970 Discovery Log Number of Records 2, Generation counter 2 00:23:15.970 =====Discovery Log Entry 0====== 00:23:15.970 trtype: tcp 00:23:15.970 adrfam: ipv4 00:23:15.970 subtype: current discovery subsystem 00:23:15.970 treq: not specified, sq flow control disable supported 00:23:15.970 portid: 1 00:23:15.970 trsvcid: 4420 00:23:15.970 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:15.970 traddr: 10.0.0.1 00:23:15.970 eflags: none 00:23:15.970 sectype: none 00:23:15.970 =====Discovery Log Entry 1====== 00:23:15.970 trtype: tcp 00:23:15.970 adrfam: ipv4 00:23:15.970 subtype: nvme subsystem 00:23:15.970 treq: not specified, sq flow control disable supported 00:23:15.970 portid: 1 00:23:15.971 trsvcid: 4420 00:23:15.971 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:15.971 traddr: 10.0.0.1 00:23:15.971 eflags: none 00:23:15.971 sectype: none 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:15.971 16:38:34 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:15.971 16:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:19.263 Initializing NVMe Controllers 00:23:19.263 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:19.263 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:19.263 Initialization complete. Launching workers. 00:23:19.263 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31119, failed: 0 00:23:19.263 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31119, failed to submit 0 00:23:19.263 success 0, unsuccess 31119, failed 0 00:23:19.263 16:38:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:19.263 16:38:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:22.547 Initializing NVMe Controllers 00:23:22.547 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:22.547 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:22.547 Initialization complete. Launching workers. 
00:23:22.547 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74346, failed: 0 00:23:22.547 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30664, failed to submit 43682 00:23:22.547 success 0, unsuccess 30664, failed 0 00:23:22.547 16:38:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:22.547 16:38:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:25.828 Initializing NVMe Controllers 00:23:25.828 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:25.828 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:25.828 Initialization complete. Launching workers. 00:23:25.828 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91713, failed: 0 00:23:25.828 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22924, failed to submit 68789 00:23:25.828 success 0, unsuccess 22924, failed 0 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:25.828 16:38:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:26.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:28.922 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:28.922 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:28.922 00:23:28.922 real 0m13.902s 00:23:28.922 user 0m5.677s 00:23:28.922 sys 0m5.259s 00:23:28.922 ************************************ 00:23:28.922 END TEST kernel_target_abort 00:23:28.922 ************************************ 00:23:28.922 16:38:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.922 16:38:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:28.922 16:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:28.922 16:38:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:28.922 
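kernel_target_abort exercises the in-kernel nvmet target instead of the SPDK application: the trace above builds the target entirely through configfs, backs namespace 1 with the local /dev/nvme1n1 that passed the GPT check, listens on 10.0.0.1:4420 over TCP, and clean_kernel_target then unwinds the same tree and unloads nvmet_tcp/nvmet. Condensed, the configfs sequence is roughly the following sketch; xtrace hides redirect targets, so the attribute names shown (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the conventional nvmet ones and an assumption about where each echo lands:

    modprobe nvmet        # the tcp transport module is loaded on demand when the port is bound
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"            # assumed target of the first echo
    echo 1                                > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1                     > "$sub/namespaces/1/device_path"
    echo 1                                > "$sub/namespaces/1/enable"
    echo 10.0.0.1                         > "$port/addr_traddr"
    echo tcp                              > "$port/addr_trtype"
    echo 4420                             > "$port/addr_trsvcid"
    echo ipv4                             > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"      # port goes live; 'nvme discover' then shows both log entries
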
16:38:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:28.922 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:28.922 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.179 rmmod nvme_tcp 00:23:29.179 rmmod nvme_fabrics 00:23:29.179 rmmod nvme_keyring 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99394 ']' 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99394 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 99394 ']' 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 99394 00:23:29.179 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (99394) - No such process 00:23:29.179 Process with pid 99394 is not found 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 99394 is not found' 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:29.179 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:29.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:29.445 Waiting for block devices as requested 00:23:29.445 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:29.703 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:29.703 00:23:29.703 real 0m28.103s 00:23:29.703 user 0m51.146s 00:23:29.703 sys 0m8.406s 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:29.703 16:38:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:29.703 ************************************ 00:23:29.703 END TEST nvmf_abort_qd_sizes 00:23:29.703 ************************************ 00:23:29.703 16:38:47 -- common/autotest_common.sh@1142 -- # return 0 00:23:29.703 16:38:47 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:29.703 16:38:47 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:23:29.703 16:38:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:29.703 16:38:47 -- common/autotest_common.sh@10 -- # set +x 00:23:29.703 ************************************ 00:23:29.703 START TEST keyring_file 00:23:29.703 ************************************ 00:23:29.703 16:38:47 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:29.703 * Looking for test storage... 00:23:29.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:29.703 16:38:47 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:29.703 16:38:47 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.703 16:38:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.962 16:38:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.962 16:38:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.962 16:38:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.962 16:38:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.962 16:38:47 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.962 16:38:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.962 16:38:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:29.962 16:38:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.962 16:38:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.962 16:38:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:29.962 16:38:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:29.962 16:38:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:29.962 16:38:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:29.963 16:38:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:29.963 16:38:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:29.963 16:38:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Y2qs7ZeoH1 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Y2qs7ZeoH1 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Y2qs7ZeoH1 00:23:29.963 16:38:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Y2qs7ZeoH1 00:23:29.963 16:38:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QKYgC71MpD 00:23:29.963 16:38:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:29.963 16:38:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:29.963 16:38:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QKYgC71MpD 00:23:29.963 16:38:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QKYgC71MpD 00:23:29.963 16:38:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QKYgC71MpD 00:23:29.963 16:38:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=100278 00:23:29.963 16:38:48 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:29.963 16:38:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100278 00:23:29.963 16:38:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100278 ']' 00:23:29.963 16:38:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.963 16:38:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.963 16:38:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.963 16:38:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.963 16:38:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:29.963 [2024-07-21 16:38:48.117488] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
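
The two temp files registered as key0 and key1 above come out of the prep_key/format_interchange_psk helpers traced in keyring/common.sh. A rough standalone sketch of that flow follows; the exact interchange framing (prefix:digest:base64 of the key bytes plus their CRC32, with the hex string taken as the literal secret and digest 00 meaning "no hash") is an assumption here, not something the log itself confirms:

  key=00112233445566778899aabbccddeeff
  path=$(mktemp)    # e.g. /tmp/tmp.Y2qs7ZeoH1 in this run
  # assumed framing: NVMeTLSkey-1:<digest>:<base64(key bytes + 4-byte little-endian CRC32)>:
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k+crc).decode())' "$key" > "$path"
  chmod 0600 "$path"   # looser modes are refused, as the 0660 negative test later in this run shows
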
00:23:29.963 [2024-07-21 16:38:48.117587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100278 ] 00:23:30.222 [2024-07-21 16:38:48.254525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.222 [2024-07-21 16:38:48.371456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:31.155 16:38:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:31.155 [2024-07-21 16:38:49.110904] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.155 null0 00:23:31.155 [2024-07-21 16:38:49.142866] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.155 [2024-07-21 16:38:49.143128] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:31.155 [2024-07-21 16:38:49.150851] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.155 16:38:49 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:31.155 [2024-07-21 16:38:49.166859] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:31.155 2024/07/21 16:38:49 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:23:31.155 request: 00:23:31.155 { 00:23:31.155 "method": "nvmf_subsystem_add_listener", 00:23:31.155 "params": { 00:23:31.155 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.155 "secure_channel": false, 00:23:31.155 "listen_address": { 00:23:31.155 "trtype": "tcp", 00:23:31.155 "traddr": "127.0.0.1", 00:23:31.155 "trsvcid": "4420" 00:23:31.155 } 00:23:31.155 } 00:23:31.155 } 00:23:31.155 Got JSON-RPC error 
response 00:23:31.155 GoRPCClient: error on JSON-RPC call 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:31.155 16:38:49 keyring_file -- keyring/file.sh@46 -- # bperfpid=100313 00:23:31.155 16:38:49 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:31.155 16:38:49 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100313 /var/tmp/bperf.sock 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100313 ']' 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:31.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:31.155 16:38:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:31.155 [2024-07-21 16:38:49.233324] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 00:23:31.155 [2024-07-21 16:38:49.234017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100313 ] 00:23:31.413 [2024-07-21 16:38:49.374558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.413 [2024-07-21 16:38:49.480028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.992 16:38:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.992 16:38:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:31.992 16:38:50 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y2qs7ZeoH1 00:23:31.992 16:38:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y2qs7ZeoH1 00:23:32.249 16:38:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QKYgC71MpD 00:23:32.249 16:38:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QKYgC71MpD 00:23:32.507 16:38:50 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:23:32.507 16:38:50 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:23:32.507 16:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:32.507 16:38:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:32.507 16:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:33.073 16:38:50 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Y2qs7ZeoH1 == 
\/\t\m\p\/\t\m\p\.\Y\2\q\s\7\Z\e\o\H\1 ]] 00:23:33.073 16:38:50 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:23:33.073 16:38:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:33.073 16:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:33.073 16:38:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.073 16:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:33.073 16:38:51 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.QKYgC71MpD == \/\t\m\p\/\t\m\p\.\Q\K\Y\g\C\7\1\M\p\D ]] 00:23:33.073 16:38:51 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:23:33.073 16:38:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:33.073 16:38:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:33.073 16:38:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:33.073 16:38:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:33.073 16:38:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.343 16:38:51 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:33.343 16:38:51 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:23:33.343 16:38:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:33.343 16:38:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:33.343 16:38:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:33.343 16:38:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:33.343 16:38:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:33.616 16:38:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:33.616 16:38:51 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:33.616 16:38:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:33.875 [2024-07-21 16:38:52.013594] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:34.133 nvme0n1 00:23:34.133 16:38:52 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:23:34.133 16:38:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:34.133 16:38:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:34.133 16:38:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:34.133 16:38:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:34.134 16:38:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:34.391 16:38:52 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:34.391 16:38:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:23:34.391 16:38:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:34.391 16:38:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:34.391 16:38:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:23:34.391 16:38:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:34.391 16:38:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:34.649 16:38:52 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:34.649 16:38:52 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:34.649 Running I/O for 1 seconds... 00:23:35.583 00:23:35.583 Latency(us) 00:23:35.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.583 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:35.583 nvme0n1 : 1.01 13344.58 52.13 0.00 0.00 9560.65 5481.19 18350.08 00:23:35.583 =================================================================================================================== 00:23:35.583 Total : 13344.58 52.13 0.00 0.00 9560.65 5481.19 18350.08 00:23:35.583 0 00:23:35.841 16:38:53 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:35.841 16:38:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:36.098 16:38:54 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:23:36.098 16:38:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:36.098 16:38:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:36.098 16:38:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:36.098 16:38:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.098 16:38:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:36.357 16:38:54 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:36.357 16:38:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:23:36.357 16:38:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:36.357 16:38:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:36.357 16:38:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:36.357 16:38:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:36.357 16:38:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.357 16:38:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:36.357 16:38:54 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:36.357 16:38:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:36.357 16:38:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:36.357 16:38:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:36.357 16:38:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.357 16:38:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:36.357 16:38:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
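
The refcnt checks that recur throughout this suite all reduce to the same small helper: list the keys over the bdevperf RPC socket and filter with jq. A sketch, assuming the /var/tmp/bperf.sock socket used in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  get_refcnt() {
      # list every key registered with the bdevperf instance and pull out the
      # reference count of the requested one
      "$rpc" -s "$sock" keyring_get_keys | jq -r --arg n "$1" '.[] | select(.name == $n) | .refcnt'
  }
  get_refcnt key0   # 2 while nvme0 holds the key in this run, 1 after the detach
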
00:23:36.357 16:38:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:36.357 16:38:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:36.615 [2024-07-21 16:38:54.796550] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:36.615 [2024-07-21 16:38:54.797034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacef30 (107): Transport endpoint is not connected 00:23:36.615 [2024-07-21 16:38:54.798024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacef30 (9): Bad file descriptor 00:23:36.615 [2024-07-21 16:38:54.799021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:36.615 [2024-07-21 16:38:54.799044] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:36.615 [2024-07-21 16:38:54.799053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:36.615 2024/07/21 16:38:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:36.615 request: 00:23:36.615 { 00:23:36.615 "method": "bdev_nvme_attach_controller", 00:23:36.615 "params": { 00:23:36.615 "name": "nvme0", 00:23:36.615 "trtype": "tcp", 00:23:36.615 "traddr": "127.0.0.1", 00:23:36.615 "adrfam": "ipv4", 00:23:36.615 "trsvcid": "4420", 00:23:36.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.615 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:36.615 "prchk_reftag": false, 00:23:36.615 "prchk_guard": false, 00:23:36.615 "hdgst": false, 00:23:36.615 "ddgst": false, 00:23:36.615 "psk": "key1" 00:23:36.615 } 00:23:36.615 } 00:23:36.615 Got JSON-RPC error response 00:23:36.615 GoRPCClient: error on JSON-RPC call 00:23:36.615 16:38:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:36.615 16:38:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:36.615 16:38:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:36.615 16:38:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:36.615 16:38:54 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:23:36.873 16:38:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:36.873 16:38:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:36.873 16:38:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:36.873 16:38:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:36.873 16:38:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:37.131 16:38:55 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:37.131 
16:38:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:23:37.131 16:38:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:37.131 16:38:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:37.131 16:38:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:37.131 16:38:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:37.131 16:38:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:37.131 16:38:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:37.131 16:38:55 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:37.131 16:38:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:37.388 16:38:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:37.388 16:38:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:37.647 16:38:55 keyring_file -- keyring/file.sh@77 -- # jq length 00:23:37.647 16:38:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:37.647 16:38:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:37.905 16:38:56 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:37.905 16:38:56 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Y2qs7ZeoH1 00:23:37.905 16:38:56 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y2qs7ZeoH1 00:23:37.905 16:38:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:37.905 16:38:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y2qs7ZeoH1 00:23:37.905 16:38:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:37.905 16:38:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.905 16:38:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:37.905 16:38:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.905 16:38:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y2qs7ZeoH1 00:23:37.905 16:38:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y2qs7ZeoH1 00:23:38.164 [2024-07-21 16:38:56.289616] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Y2qs7ZeoH1': 0100660 00:23:38.164 [2024-07-21 16:38:56.289655] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:38.164 2024/07/21 16:38:56 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Y2qs7ZeoH1], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:23:38.164 request: 00:23:38.164 { 00:23:38.164 "method": "keyring_file_add_key", 00:23:38.164 "params": { 00:23:38.164 "name": "key0", 00:23:38.164 "path": "/tmp/tmp.Y2qs7ZeoH1" 00:23:38.164 } 00:23:38.164 } 00:23:38.164 Got JSON-RPC error response 00:23:38.164 GoRPCClient: error on JSON-RPC call 00:23:38.164 16:38:56 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:23:38.164 16:38:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.164 16:38:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.164 16:38:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.164 16:38:56 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Y2qs7ZeoH1 00:23:38.164 16:38:56 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y2qs7ZeoH1 00:23:38.164 16:38:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y2qs7ZeoH1 00:23:38.421 16:38:56 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Y2qs7ZeoH1 00:23:38.421 16:38:56 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:23:38.421 16:38:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:38.421 16:38:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:38.421 16:38:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:38.421 16:38:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:38.421 16:38:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:38.678 16:38:56 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:38.679 16:38:56 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:38.679 16:38:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:38.679 16:38:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:38.679 16:38:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:38.679 16:38:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.679 16:38:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:38.679 16:38:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:38.679 16:38:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:38.679 16:38:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:38.936 [2024-07-21 16:38:57.009745] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Y2qs7ZeoH1': No such file or directory 00:23:38.936 [2024-07-21 16:38:57.009772] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:38.936 [2024-07-21 16:38:57.009794] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:38.936 [2024-07-21 16:38:57.009802] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:38.936 [2024-07-21 16:38:57.009810] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:38.936 2024/07/21 
16:38:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:23:38.936 request: 00:23:38.936 { 00:23:38.936 "method": "bdev_nvme_attach_controller", 00:23:38.936 "params": { 00:23:38.936 "name": "nvme0", 00:23:38.936 "trtype": "tcp", 00:23:38.936 "traddr": "127.0.0.1", 00:23:38.936 "adrfam": "ipv4", 00:23:38.936 "trsvcid": "4420", 00:23:38.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:38.936 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:38.936 "prchk_reftag": false, 00:23:38.936 "prchk_guard": false, 00:23:38.936 "hdgst": false, 00:23:38.936 "ddgst": false, 00:23:38.936 "psk": "key0" 00:23:38.936 } 00:23:38.936 } 00:23:38.936 Got JSON-RPC error response 00:23:38.936 GoRPCClient: error on JSON-RPC call 00:23:38.936 16:38:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:38.936 16:38:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:38.936 16:38:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:38.936 16:38:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:38.936 16:38:57 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:38.936 16:38:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:39.193 16:38:57 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oiahpZfwyW 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:39.193 16:38:57 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:39.193 16:38:57 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:39.193 16:38:57 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:39.193 16:38:57 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:39.193 16:38:57 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:39.193 16:38:57 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oiahpZfwyW 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oiahpZfwyW 00:23:39.193 16:38:57 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.oiahpZfwyW 00:23:39.193 16:38:57 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oiahpZfwyW 00:23:39.193 16:38:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oiahpZfwyW 00:23:39.486 16:38:57 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:39.486 16:38:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:39.743 nvme0n1 00:23:39.744 16:38:57 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:23:39.744 16:38:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:39.744 16:38:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:39.744 16:38:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:39.744 16:38:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:39.744 16:38:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:40.001 16:38:58 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:40.001 16:38:58 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:40.001 16:38:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:40.259 16:38:58 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:23:40.259 16:38:58 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:23:40.259 16:38:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:40.259 16:38:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:40.259 16:38:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.516 16:38:58 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:40.516 16:38:58 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:23:40.516 16:38:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:40.516 16:38:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:40.516 16:38:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:40.516 16:38:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.516 16:38:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:40.774 16:38:58 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:40.774 16:38:58 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:40.774 16:38:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:41.033 16:38:59 keyring_file -- keyring/file.sh@104 -- # jq length 00:23:41.033 16:38:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:41.033 16:38:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:41.599 16:38:59 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:41.599 16:38:59 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.oiahpZfwyW 00:23:41.599 16:38:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.oiahpZfwyW 
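
Stripped of the bperf_cmd wrapper, the attach path exercised at file.sh@96-97 is just two RPC calls against the bdevperf socket, with the same addresses and NQNs shown above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # register the PSK file under a key name, then hand that name to the
  # NVMe/TCP controller so the connection is brought up with that key
  "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.oiahpZfwyW
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk key0

The earlier negative tests show the two ways this can fail: keyring_file_add_key refuses a key file left at 0660, and attaching after the file has been removed errors out with "No such file or directory".
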
00:23:41.599 16:38:59 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QKYgC71MpD 00:23:41.599 16:38:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QKYgC71MpD 00:23:41.868 16:38:59 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:41.868 16:38:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:42.142 nvme0n1 00:23:42.142 16:39:00 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:42.142 16:39:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:42.401 16:39:00 keyring_file -- keyring/file.sh@112 -- # config='{ 00:23:42.401 "subsystems": [ 00:23:42.401 { 00:23:42.401 "subsystem": "keyring", 00:23:42.401 "config": [ 00:23:42.401 { 00:23:42.401 "method": "keyring_file_add_key", 00:23:42.401 "params": { 00:23:42.401 "name": "key0", 00:23:42.401 "path": "/tmp/tmp.oiahpZfwyW" 00:23:42.401 } 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "method": "keyring_file_add_key", 00:23:42.401 "params": { 00:23:42.401 "name": "key1", 00:23:42.401 "path": "/tmp/tmp.QKYgC71MpD" 00:23:42.401 } 00:23:42.401 } 00:23:42.401 ] 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "subsystem": "iobuf", 00:23:42.401 "config": [ 00:23:42.401 { 00:23:42.401 "method": "iobuf_set_options", 00:23:42.401 "params": { 00:23:42.401 "large_bufsize": 135168, 00:23:42.401 "large_pool_count": 1024, 00:23:42.401 "small_bufsize": 8192, 00:23:42.401 "small_pool_count": 8192 00:23:42.401 } 00:23:42.401 } 00:23:42.401 ] 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "subsystem": "sock", 00:23:42.401 "config": [ 00:23:42.401 { 00:23:42.401 "method": "sock_set_default_impl", 00:23:42.401 "params": { 00:23:42.401 "impl_name": "posix" 00:23:42.401 } 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "method": "sock_impl_set_options", 00:23:42.401 "params": { 00:23:42.401 "enable_ktls": false, 00:23:42.401 "enable_placement_id": 0, 00:23:42.401 "enable_quickack": false, 00:23:42.401 "enable_recv_pipe": true, 00:23:42.401 "enable_zerocopy_send_client": false, 00:23:42.401 "enable_zerocopy_send_server": true, 00:23:42.401 "impl_name": "ssl", 00:23:42.401 "recv_buf_size": 4096, 00:23:42.401 "send_buf_size": 4096, 00:23:42.401 "tls_version": 0, 00:23:42.401 "zerocopy_threshold": 0 00:23:42.401 } 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "method": "sock_impl_set_options", 00:23:42.401 "params": { 00:23:42.401 "enable_ktls": false, 00:23:42.401 "enable_placement_id": 0, 00:23:42.401 "enable_quickack": false, 00:23:42.401 "enable_recv_pipe": true, 00:23:42.401 "enable_zerocopy_send_client": false, 00:23:42.401 "enable_zerocopy_send_server": true, 00:23:42.401 "impl_name": "posix", 00:23:42.401 "recv_buf_size": 2097152, 00:23:42.401 "send_buf_size": 2097152, 00:23:42.401 "tls_version": 0, 00:23:42.401 "zerocopy_threshold": 0 00:23:42.401 } 00:23:42.401 } 00:23:42.401 ] 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "subsystem": "vmd", 00:23:42.401 "config": [] 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "subsystem": "accel", 00:23:42.401 "config": [ 00:23:42.401 { 00:23:42.401 "method": 
"accel_set_options", 00:23:42.401 "params": { 00:23:42.401 "buf_count": 2048, 00:23:42.401 "large_cache_size": 16, 00:23:42.401 "sequence_count": 2048, 00:23:42.401 "small_cache_size": 128, 00:23:42.401 "task_count": 2048 00:23:42.401 } 00:23:42.401 } 00:23:42.401 ] 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "subsystem": "bdev", 00:23:42.401 "config": [ 00:23:42.401 { 00:23:42.401 "method": "bdev_set_options", 00:23:42.401 "params": { 00:23:42.401 "bdev_auto_examine": true, 00:23:42.401 "bdev_io_cache_size": 256, 00:23:42.401 "bdev_io_pool_size": 65535, 00:23:42.401 "iobuf_large_cache_size": 16, 00:23:42.401 "iobuf_small_cache_size": 128 00:23:42.401 } 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "method": "bdev_raid_set_options", 00:23:42.401 "params": { 00:23:42.401 "process_max_bandwidth_mb_sec": 0, 00:23:42.401 "process_window_size_kb": 1024 00:23:42.401 } 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "method": "bdev_iscsi_set_options", 00:23:42.401 "params": { 00:23:42.401 "timeout_sec": 30 00:23:42.401 } 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "method": "bdev_nvme_set_options", 00:23:42.401 "params": { 00:23:42.401 "action_on_timeout": "none", 00:23:42.401 "allow_accel_sequence": false, 00:23:42.401 "arbitration_burst": 0, 00:23:42.401 "bdev_retry_count": 3, 00:23:42.401 "ctrlr_loss_timeout_sec": 0, 00:23:42.401 "delay_cmd_submit": true, 00:23:42.401 "dhchap_dhgroups": [ 00:23:42.401 "null", 00:23:42.401 "ffdhe2048", 00:23:42.401 "ffdhe3072", 00:23:42.401 "ffdhe4096", 00:23:42.401 "ffdhe6144", 00:23:42.401 "ffdhe8192" 00:23:42.401 ], 00:23:42.401 "dhchap_digests": [ 00:23:42.401 "sha256", 00:23:42.401 "sha384", 00:23:42.401 "sha512" 00:23:42.401 ], 00:23:42.401 "disable_auto_failback": false, 00:23:42.401 "fast_io_fail_timeout_sec": 0, 00:23:42.401 "generate_uuids": false, 00:23:42.401 "high_priority_weight": 0, 00:23:42.401 "io_path_stat": false, 00:23:42.401 "io_queue_requests": 512, 00:23:42.401 "keep_alive_timeout_ms": 10000, 00:23:42.401 "low_priority_weight": 0, 00:23:42.401 "medium_priority_weight": 0, 00:23:42.401 "nvme_adminq_poll_period_us": 10000, 00:23:42.401 "nvme_error_stat": false, 00:23:42.401 "nvme_ioq_poll_period_us": 0, 00:23:42.401 "rdma_cm_event_timeout_ms": 0, 00:23:42.401 "rdma_max_cq_size": 0, 00:23:42.401 "rdma_srq_size": 0, 00:23:42.401 "reconnect_delay_sec": 0, 00:23:42.401 "timeout_admin_us": 0, 00:23:42.401 "timeout_us": 0, 00:23:42.401 "transport_ack_timeout": 0, 00:23:42.401 "transport_retry_count": 4, 00:23:42.401 "transport_tos": 0 00:23:42.401 } 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "method": "bdev_nvme_attach_controller", 00:23:42.401 "params": { 00:23:42.401 "adrfam": "IPv4", 00:23:42.401 "ctrlr_loss_timeout_sec": 0, 00:23:42.401 "ddgst": false, 00:23:42.401 "fast_io_fail_timeout_sec": 0, 00:23:42.401 "hdgst": false, 00:23:42.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:42.401 "name": "nvme0", 00:23:42.401 "prchk_guard": false, 00:23:42.401 "prchk_reftag": false, 00:23:42.401 "psk": "key0", 00:23:42.401 "reconnect_delay_sec": 0, 00:23:42.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.401 "traddr": "127.0.0.1", 00:23:42.401 "trsvcid": "4420", 00:23:42.401 "trtype": "TCP" 00:23:42.401 } 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "method": "bdev_nvme_set_hotplug", 00:23:42.401 "params": { 00:23:42.401 "enable": false, 00:23:42.401 "period_us": 100000 00:23:42.401 } 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "method": "bdev_wait_for_examine" 00:23:42.401 } 00:23:42.401 ] 00:23:42.401 }, 00:23:42.401 { 00:23:42.401 "subsystem": 
"nbd", 00:23:42.401 "config": [] 00:23:42.401 } 00:23:42.401 ] 00:23:42.401 }' 00:23:42.401 16:39:00 keyring_file -- keyring/file.sh@114 -- # killprocess 100313 00:23:42.401 16:39:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100313 ']' 00:23:42.401 16:39:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100313 00:23:42.401 16:39:00 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:42.401 16:39:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:42.401 16:39:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100313 00:23:42.401 16:39:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:42.401 16:39:00 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:42.401 killing process with pid 100313 00:23:42.401 16:39:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100313' 00:23:42.401 16:39:00 keyring_file -- common/autotest_common.sh@967 -- # kill 100313 00:23:42.401 Received shutdown signal, test time was about 1.000000 seconds 00:23:42.401 00:23:42.401 Latency(us) 00:23:42.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.401 =================================================================================================================== 00:23:42.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.402 16:39:00 keyring_file -- common/autotest_common.sh@972 -- # wait 100313 00:23:42.661 16:39:00 keyring_file -- keyring/file.sh@117 -- # bperfpid=100779 00:23:42.661 16:39:00 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100779 /var/tmp/bperf.sock 00:23:42.661 16:39:00 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100779 ']' 00:23:42.661 16:39:00 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:42.661 16:39:00 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.661 16:39:00 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:42.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:42.661 16:39:00 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:42.661 16:39:00 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.661 16:39:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:42.661 16:39:00 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:23:42.661 "subsystems": [ 00:23:42.661 { 00:23:42.661 "subsystem": "keyring", 00:23:42.661 "config": [ 00:23:42.661 { 00:23:42.661 "method": "keyring_file_add_key", 00:23:42.661 "params": { 00:23:42.661 "name": "key0", 00:23:42.661 "path": "/tmp/tmp.oiahpZfwyW" 00:23:42.661 } 00:23:42.661 }, 00:23:42.661 { 00:23:42.661 "method": "keyring_file_add_key", 00:23:42.661 "params": { 00:23:42.661 "name": "key1", 00:23:42.661 "path": "/tmp/tmp.QKYgC71MpD" 00:23:42.661 } 00:23:42.661 } 00:23:42.661 ] 00:23:42.661 }, 00:23:42.661 { 00:23:42.661 "subsystem": "iobuf", 00:23:42.661 "config": [ 00:23:42.661 { 00:23:42.661 "method": "iobuf_set_options", 00:23:42.661 "params": { 00:23:42.661 "large_bufsize": 135168, 00:23:42.661 "large_pool_count": 1024, 00:23:42.661 "small_bufsize": 8192, 00:23:42.661 "small_pool_count": 8192 00:23:42.661 } 00:23:42.661 } 00:23:42.661 ] 00:23:42.661 }, 00:23:42.661 { 00:23:42.661 "subsystem": "sock", 00:23:42.661 "config": [ 00:23:42.661 { 00:23:42.661 "method": "sock_set_default_impl", 00:23:42.661 "params": { 00:23:42.661 "impl_name": "posix" 00:23:42.661 } 00:23:42.661 }, 00:23:42.661 { 00:23:42.661 "method": "sock_impl_set_options", 00:23:42.661 "params": { 00:23:42.661 "enable_ktls": false, 00:23:42.661 "enable_placement_id": 0, 00:23:42.661 "enable_quickack": false, 00:23:42.661 "enable_recv_pipe": true, 00:23:42.661 "enable_zerocopy_send_client": false, 00:23:42.661 "enable_zerocopy_send_server": true, 00:23:42.661 "impl_name": "ssl", 00:23:42.661 "recv_buf_size": 4096, 00:23:42.661 "send_buf_size": 4096, 00:23:42.661 "tls_version": 0, 00:23:42.662 "zerocopy_threshold": 0 00:23:42.662 } 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "method": "sock_impl_set_options", 00:23:42.662 "params": { 00:23:42.662 "enable_ktls": false, 00:23:42.662 "enable_placement_id": 0, 00:23:42.662 "enable_quickack": false, 00:23:42.662 "enable_recv_pipe": true, 00:23:42.662 "enable_zerocopy_send_client": false, 00:23:42.662 "enable_zerocopy_send_server": true, 00:23:42.662 "impl_name": "posix", 00:23:42.662 "recv_buf_size": 2097152, 00:23:42.662 "send_buf_size": 2097152, 00:23:42.662 "tls_version": 0, 00:23:42.662 "zerocopy_threshold": 0 00:23:42.662 } 00:23:42.662 } 00:23:42.662 ] 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "subsystem": "vmd", 00:23:42.662 "config": [] 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "subsystem": "accel", 00:23:42.662 "config": [ 00:23:42.662 { 00:23:42.662 "method": "accel_set_options", 00:23:42.662 "params": { 00:23:42.662 "buf_count": 2048, 00:23:42.662 "large_cache_size": 16, 00:23:42.662 "sequence_count": 2048, 00:23:42.662 "small_cache_size": 128, 00:23:42.662 "task_count": 2048 00:23:42.662 } 00:23:42.662 } 00:23:42.662 ] 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "subsystem": "bdev", 00:23:42.662 "config": [ 00:23:42.662 { 00:23:42.662 "method": "bdev_set_options", 00:23:42.662 "params": { 00:23:42.662 "bdev_auto_examine": true, 00:23:42.662 "bdev_io_cache_size": 256, 00:23:42.662 "bdev_io_pool_size": 65535, 00:23:42.662 "iobuf_large_cache_size": 16, 00:23:42.662 "iobuf_small_cache_size": 128 00:23:42.662 } 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "method": "bdev_raid_set_options", 00:23:42.662 "params": { 00:23:42.662 "process_max_bandwidth_mb_sec": 0, 00:23:42.662 "process_window_size_kb": 1024 00:23:42.662 
} 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "method": "bdev_iscsi_set_options", 00:23:42.662 "params": { 00:23:42.662 "timeout_sec": 30 00:23:42.662 } 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "method": "bdev_nvme_set_options", 00:23:42.662 "params": { 00:23:42.662 "action_on_timeout": "none", 00:23:42.662 "allow_accel_sequence": false, 00:23:42.662 "arbitration_burst": 0, 00:23:42.662 "bdev_retry_count": 3, 00:23:42.662 "ctrlr_loss_timeout_sec": 0, 00:23:42.662 "delay_cmd_submit": true, 00:23:42.662 "dhchap_dhgroups": [ 00:23:42.662 "null", 00:23:42.662 "ffdhe2048", 00:23:42.662 "ffdhe3072", 00:23:42.662 "ffdhe4096", 00:23:42.662 "ffdhe6144", 00:23:42.662 "ffdhe8192" 00:23:42.662 ], 00:23:42.662 "dhchap_digests": [ 00:23:42.662 "sha256", 00:23:42.662 "sha384", 00:23:42.662 "sha512" 00:23:42.662 ], 00:23:42.662 "disable_auto_failback": false, 00:23:42.662 "fast_io_fail_timeout_sec": 0, 00:23:42.662 "generate_uuids": false, 00:23:42.662 "high_priority_weight": 0, 00:23:42.662 "io_path_stat": false, 00:23:42.662 "io_queue_requests": 512, 00:23:42.662 "keep_alive_timeout_ms": 10000, 00:23:42.662 "low_priority_weight": 0, 00:23:42.662 "medium_priority_weight": 0, 00:23:42.662 "nvme_adminq_poll_period_us": 10000, 00:23:42.662 "nvme_error_stat": false, 00:23:42.662 "nvme_ioq_poll_period_us": 0, 00:23:42.662 "rdma_cm_event_timeout_ms": 0, 00:23:42.662 "rdma_max_cq_size": 0, 00:23:42.662 "rdma_srq_size": 0, 00:23:42.662 "reconnect_delay_sec": 0, 00:23:42.662 "timeout_admin_us": 0, 00:23:42.662 "timeout_us": 0, 00:23:42.662 "transport_ack_timeout": 0, 00:23:42.662 "transport_retry_count": 4, 00:23:42.662 "transport_tos": 0 00:23:42.662 } 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "method": "bdev_nvme_attach_controller", 00:23:42.662 "params": { 00:23:42.662 "adrfam": "IPv4", 00:23:42.662 "ctrlr_loss_timeout_sec": 0, 00:23:42.662 "ddgst": false, 00:23:42.662 "fast_io_fail_timeout_sec": 0, 00:23:42.662 "hdgst": false, 00:23:42.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:42.662 "name": "nvme0", 00:23:42.662 "prchk_guard": false, 00:23:42.662 "prchk_reftag": false, 00:23:42.662 "psk": "key0", 00:23:42.662 "reconnect_delay_sec": 0, 00:23:42.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.662 "traddr": "127.0.0.1", 00:23:42.662 "trsvcid": "4420", 00:23:42.662 "trtype": "TCP" 00:23:42.662 } 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "method": "bdev_nvme_set_hotplug", 00:23:42.662 "params": { 00:23:42.662 "enable": false, 00:23:42.662 "period_us": 100000 00:23:42.662 } 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "method": "bdev_wait_for_examine" 00:23:42.662 } 00:23:42.662 ] 00:23:42.662 }, 00:23:42.662 { 00:23:42.662 "subsystem": "nbd", 00:23:42.662 "config": [] 00:23:42.662 } 00:23:42.662 ] 00:23:42.662 }' 00:23:42.921 [2024-07-21 16:39:00.886083] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
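
Putting the reload sequence together: the JSON echoed above was captured from the first bdevperf with save_config, and after that instance is stopped it is replayed into a fresh bdevperf through a file descriptor; the suite then confirms the keys and controller survived the round trip. A condensed sketch, assuming the same binary paths and socket as this run:

  bperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # capture the live configuration (keyring + bdev_nvme subsystems included) ...
  config=$("$rpc" -s "$sock" save_config)
  # ... stop the old instance (killprocess in the real test), then replay the
  # config into a new bdevperf via process substitution (-c /dev/fd/NN)
  "$bperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r "$sock" -z -c <(echo "$config") &
  # once it is listening again, both keys and the nvme0 controller should be back
  (( $("$rpc" -s "$sock" keyring_get_keys | jq length) == 2 ))
  [[ $("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
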
00:23:42.921 [2024-07-21 16:39:00.886177] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100779 ] 00:23:42.921 [2024-07-21 16:39:01.022480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.921 [2024-07-21 16:39:01.102773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.180 [2024-07-21 16:39:01.307456] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.748 16:39:01 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.748 16:39:01 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:43.748 16:39:01 keyring_file -- keyring/file.sh@120 -- # jq length 00:23:43.748 16:39:01 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:43.748 16:39:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:44.007 16:39:02 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:44.007 16:39:02 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:23:44.007 16:39:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:44.007 16:39:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:44.007 16:39:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:44.007 16:39:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:44.007 16:39:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:44.266 16:39:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:44.266 16:39:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:23:44.266 16:39:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:44.266 16:39:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:44.266 16:39:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:44.266 16:39:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:44.266 16:39:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:44.524 16:39:02 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:44.524 16:39:02 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:44.524 16:39:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:44.524 16:39:02 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:44.782 16:39:02 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:44.782 16:39:02 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:44.782 16:39:02 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.oiahpZfwyW /tmp/tmp.QKYgC71MpD 00:23:44.782 16:39:02 keyring_file -- keyring/file.sh@20 -- # killprocess 100779 00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100779 ']' 00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100779 00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
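The get_refcnt checks traced above verify that the registered keys are actually referenced: keyring_get_keys is piped through jq, the entry is selected by name, and its .refcnt field is compared, with key0 expected at 2 (registered plus in use as the attached controller's PSK) and the unused key1 at 1. A standalone sketch of the same query, assuming only the rpc.py path and bperf socket that appear in this log:

    # Read the reference count of a named key from the bperf keyring
    get_refcnt() {
        local name=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
            | jq -r --arg n "$name" '.[] | select(.name == $n) | .refcnt'
    }
    get_refcnt key0   # 2 while nvme0 holds it as its PSK
    get_refcnt key1   # 1: registered but unused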
00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100779 00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100779' 00:23:44.782 killing process with pid 100779 00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@967 -- # kill 100779 00:23:44.782 Received shutdown signal, test time was about 1.000000 seconds 00:23:44.782 00:23:44.782 Latency(us) 00:23:44.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.782 =================================================================================================================== 00:23:44.782 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:44.782 16:39:02 keyring_file -- common/autotest_common.sh@972 -- # wait 100779 00:23:45.040 16:39:03 keyring_file -- keyring/file.sh@21 -- # killprocess 100278 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100278 ']' 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100278 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100278 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:45.040 killing process with pid 100278 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100278' 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@967 -- # kill 100278 00:23:45.040 [2024-07-21 16:39:03.210060] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:45.040 16:39:03 keyring_file -- common/autotest_common.sh@972 -- # wait 100278 00:23:45.605 00:23:45.605 real 0m15.899s 00:23:45.605 user 0m39.019s 00:23:45.605 sys 0m3.371s 00:23:45.605 16:39:03 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:45.605 ************************************ 00:23:45.605 END TEST keyring_file 00:23:45.605 ************************************ 00:23:45.605 16:39:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:45.605 16:39:03 -- common/autotest_common.sh@1142 -- # return 0 00:23:45.605 16:39:03 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:23:45.605 16:39:03 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:45.605 16:39:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:45.605 16:39:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:45.605 16:39:03 -- common/autotest_common.sh@10 -- # set +x 00:23:45.605 ************************************ 00:23:45.605 START TEST keyring_linux 00:23:45.605 ************************************ 00:23:45.605 16:39:03 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:45.863 * Looking for test storage... 
00:23:45.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:45.863 16:39:03 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:45.863 16:39:03 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:45.863 16:39:03 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:45.863 16:39:03 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.863 16:39:03 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.863 16:39:03 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.863 16:39:03 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.863 16:39:03 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=93b2ddbc-e521-435a-846f-e1bc9c67a86f 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:45.864 16:39:03 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.864 16:39:03 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.864 16:39:03 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.864 16:39:03 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.864 16:39:03 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.864 16:39:03 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.864 16:39:03 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:45.864 16:39:03 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:45.864 16:39:03 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:45.864 /tmp/:spdk-test:key0 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:45.864 16:39:03 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:45.864 /tmp/:spdk-test:key1 00:23:45.864 16:39:03 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100934 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:45.864 16:39:03 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100934 00:23:45.864 16:39:03 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100934 ']' 00:23:45.864 16:39:03 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.864 16:39:03 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.864 16:39:03 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.864 16:39:03 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.864 16:39:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:45.864 [2024-07-21 16:39:04.047652] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
00:23:45.864 [2024-07-21 16:39:04.047760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100934 ] 00:23:46.121 [2024-07-21 16:39:04.179478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.121 [2024-07-21 16:39:04.265732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.053 16:39:04 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.053 16:39:04 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:23:47.053 16:39:04 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:47.053 16:39:04 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.053 16:39:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:47.053 [2024-07-21 16:39:04.966561] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.053 null0 00:23:47.053 [2024-07-21 16:39:04.998521] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.053 [2024-07-21 16:39:04.998779] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:47.053 16:39:05 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.053 16:39:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:47.054 752966904 00:23:47.054 16:39:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:47.054 484672698 00:23:47.054 16:39:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100970 00:23:47.054 16:39:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100970 /var/tmp/bperf.sock 00:23:47.054 16:39:05 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:47.054 16:39:05 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100970 ']' 00:23:47.054 16:39:05 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:47.054 16:39:05 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:47.054 16:39:05 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:47.054 16:39:05 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.054 16:39:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:47.054 [2024-07-21 16:39:05.080762] Starting SPDK v24.09-pre git sha1 89fd17309 / DPDK 24.03.0 initialization... 
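In the keyring_linux test the PSKs live in the kernel session keyring rather than in files: keyctl add user :spdk-test:key0 ... @s returns the key's serial number (752966904 above), bdevperf is launched with --wait-for-rpc so the keyring integration can be enabled first, and bdev_nvme_attach_controller then refers to the key by its :spdk-test:key0 name via --psk. A minimal sketch of the keyctl round trip exercised here, with the NVMeTLSkey-1 payload shortened to a placeholder:

    # Load a PSK into the session keyring and resolve it back by name
    sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:<base64-psk>:" @s)   # prints the new key serial
    keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
    keyctl print "$sn"                      # dumps the NVMeTLSkey-1 payload for comparison
    keyctl unlink "$sn"                     # detaches it from the session keyring

The cleanup later in this log performs exactly this name-to-serial resolution before unlinking, which is why a "1 links removed" line is printed for each key.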
00:23:47.054 [2024-07-21 16:39:05.080877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100970 ] 00:23:47.054 [2024-07-21 16:39:05.219344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.311 [2024-07-21 16:39:05.328729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.877 16:39:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.877 16:39:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:23:47.877 16:39:05 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:47.877 16:39:05 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:48.135 16:39:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:48.135 16:39:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:48.393 16:39:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:48.393 16:39:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:48.650 [2024-07-21 16:39:06.731097] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.650 nvme0n1 00:23:48.650 16:39:06 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:48.650 16:39:06 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:48.650 16:39:06 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:48.650 16:39:06 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:48.650 16:39:06 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:48.650 16:39:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:48.908 16:39:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:48.908 16:39:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:48.908 16:39:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:48.908 16:39:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:48.908 16:39:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:48.908 16:39:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:48.908 16:39:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:49.166 16:39:07 keyring_linux -- keyring/linux.sh@25 -- # sn=752966904 00:23:49.166 16:39:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:49.166 16:39:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:49.166 16:39:07 keyring_linux -- keyring/linux.sh@26 -- # [[ 752966904 == \7\5\2\9\6\6\9\0\4 ]] 00:23:49.166 16:39:07 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 752966904 00:23:49.166 16:39:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:49.166 16:39:07 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:49.423 Running I/O for 1 seconds... 00:23:50.356 00:23:50.356 Latency(us) 00:23:50.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.356 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:50.356 nvme0n1 : 1.01 12742.76 49.78 0.00 0.00 9989.29 6196.13 14715.81 00:23:50.356 =================================================================================================================== 00:23:50.356 Total : 12742.76 49.78 0.00 0.00 9989.29 6196.13 14715.81 00:23:50.356 0 00:23:50.356 16:39:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:50.356 16:39:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:50.614 16:39:08 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:50.614 16:39:08 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:50.614 16:39:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:50.614 16:39:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:50.614 16:39:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:50.614 16:39:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:50.873 16:39:08 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:50.873 16:39:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:50.873 16:39:08 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:50.873 16:39:08 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:50.873 16:39:08 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:23:50.873 16:39:08 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:50.873 16:39:08 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:50.873 16:39:08 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.873 16:39:08 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:50.873 16:39:08 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.873 16:39:08 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:50.873 16:39:08 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:23:51.131 [2024-07-21 16:39:09.161769] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:51.131 [2024-07-21 16:39:09.162615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7dea0 (107): Transport endpoint is not connected 00:23:51.131 [2024-07-21 16:39:09.163606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7dea0 (9): Bad file descriptor 00:23:51.131 [2024-07-21 16:39:09.164603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:51.131 [2024-07-21 16:39:09.164623] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:51.131 [2024-07-21 16:39:09.164632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:51.131 2024/07/21 16:39:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:51.131 request: 00:23:51.131 { 00:23:51.131 "method": "bdev_nvme_attach_controller", 00:23:51.131 "params": { 00:23:51.131 "name": "nvme0", 00:23:51.131 "trtype": "tcp", 00:23:51.131 "traddr": "127.0.0.1", 00:23:51.131 "adrfam": "ipv4", 00:23:51.131 "trsvcid": "4420", 00:23:51.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:51.131 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:51.131 "prchk_reftag": false, 00:23:51.131 "prchk_guard": false, 00:23:51.131 "hdgst": false, 00:23:51.131 "ddgst": false, 00:23:51.131 "psk": ":spdk-test:key1" 00:23:51.131 } 00:23:51.131 } 00:23:51.131 Got JSON-RPC error response 00:23:51.131 GoRPCClient: error on JSON-RPC call 00:23:51.131 16:39:09 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:23:51.131 16:39:09 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:51.131 16:39:09 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:51.131 16:39:09 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@33 -- # sn=752966904 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 752966904 00:23:51.131 1 links removed 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@33 -- # sn=484672698 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 484672698 00:23:51.131 1 links removed 00:23:51.131 16:39:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100970 00:23:51.131 16:39:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100970 ']' 00:23:51.131 16:39:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100970 00:23:51.131 16:39:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:23:51.131 16:39:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:51.132 16:39:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100970 00:23:51.132 16:39:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:51.132 16:39:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:51.132 killing process with pid 100970 00:23:51.132 16:39:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100970' 00:23:51.132 16:39:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 100970 00:23:51.132 Received shutdown signal, test time was about 1.000000 seconds 00:23:51.132 00:23:51.132 Latency(us) 00:23:51.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.132 =================================================================================================================== 00:23:51.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.132 16:39:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 100970 00:23:51.389 16:39:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100934 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100934 ']' 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100934 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100934 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:51.389 killing process with pid 100934 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100934' 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 100934 00:23:51.389 16:39:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 100934 00:23:51.954 00:23:51.954 real 0m6.258s 00:23:51.954 user 0m11.658s 00:23:51.954 sys 0m1.706s 00:23:51.954 16:39:10 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:51.954 ************************************ 00:23:51.954 END TEST keyring_linux 00:23:51.954 16:39:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:51.954 ************************************ 00:23:51.954 16:39:10 -- common/autotest_common.sh@1142 -- # return 0 00:23:51.954 16:39:10 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 
']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:23:51.954 16:39:10 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:23:51.954 16:39:10 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:23:51.954 16:39:10 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:23:51.954 16:39:10 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:23:51.954 16:39:10 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:23:51.954 16:39:10 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:23:51.954 16:39:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:51.954 16:39:10 -- common/autotest_common.sh@10 -- # set +x 00:23:51.954 16:39:10 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:23:51.954 16:39:10 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:23:51.954 16:39:10 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:23:51.954 16:39:10 -- common/autotest_common.sh@10 -- # set +x 00:23:53.854 INFO: APP EXITING 00:23:53.854 INFO: killing all VMs 00:23:53.854 INFO: killing vhost app 00:23:53.854 INFO: EXIT DONE 00:23:54.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:54.419 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:54.419 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:54.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:54.984 Cleaning 00:23:54.984 Removing: /var/run/dpdk/spdk0/config 00:23:54.984 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:54.984 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:54.984 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:54.984 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:54.984 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:54.984 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:54.984 Removing: /var/run/dpdk/spdk1/config 00:23:54.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:54.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:54.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:54.984 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:54.984 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:54.984 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:54.984 Removing: /var/run/dpdk/spdk2/config 00:23:55.241 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:55.241 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:55.241 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:55.241 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:55.241 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:55.241 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:55.241 Removing: /var/run/dpdk/spdk3/config 00:23:55.241 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:55.241 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:55.241 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:55.241 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:55.241 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:55.241 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:23:55.241 Removing: /var/run/dpdk/spdk4/config 00:23:55.241 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:55.241 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:55.241 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:55.241 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:55.241 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:55.241 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:55.241 Removing: /dev/shm/nvmf_trace.0 00:23:55.241 Removing: /dev/shm/spdk_tgt_trace.pid60656 00:23:55.241 Removing: /var/run/dpdk/spdk0 00:23:55.241 Removing: /var/run/dpdk/spdk1 00:23:55.241 Removing: /var/run/dpdk/spdk2 00:23:55.241 Removing: /var/run/dpdk/spdk3 00:23:55.241 Removing: /var/run/dpdk/spdk4 00:23:55.241 Removing: /var/run/dpdk/spdk_pid100278 00:23:55.241 Removing: /var/run/dpdk/spdk_pid100313 00:23:55.241 Removing: /var/run/dpdk/spdk_pid100779 00:23:55.241 Removing: /var/run/dpdk/spdk_pid100934 00:23:55.241 Removing: /var/run/dpdk/spdk_pid100970 00:23:55.241 Removing: /var/run/dpdk/spdk_pid60505 00:23:55.241 Removing: /var/run/dpdk/spdk_pid60656 00:23:55.241 Removing: /var/run/dpdk/spdk_pid60922 00:23:55.241 Removing: /var/run/dpdk/spdk_pid61015 00:23:55.241 Removing: /var/run/dpdk/spdk_pid61060 00:23:55.241 Removing: /var/run/dpdk/spdk_pid61174 00:23:55.241 Removing: /var/run/dpdk/spdk_pid61205 00:23:55.241 Removing: /var/run/dpdk/spdk_pid61323 00:23:55.241 Removing: /var/run/dpdk/spdk_pid61597 00:23:55.241 Removing: /var/run/dpdk/spdk_pid61773 00:23:55.241 Removing: /var/run/dpdk/spdk_pid61857 00:23:55.241 Removing: /var/run/dpdk/spdk_pid61954 00:23:55.241 Removing: /var/run/dpdk/spdk_pid62043 00:23:55.241 Removing: /var/run/dpdk/spdk_pid62087 00:23:55.241 Removing: /var/run/dpdk/spdk_pid62117 00:23:55.241 Removing: /var/run/dpdk/spdk_pid62184 00:23:55.241 Removing: /var/run/dpdk/spdk_pid62296 00:23:55.241 Removing: /var/run/dpdk/spdk_pid62921 00:23:55.241 Removing: /var/run/dpdk/spdk_pid62985 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63055 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63083 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63162 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63190 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63269 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63297 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63354 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63384 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63430 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63460 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63612 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63642 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63717 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63786 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63811 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63876 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63910 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63945 00:23:55.241 Removing: /var/run/dpdk/spdk_pid63979 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64014 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64048 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64083 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64117 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64152 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64186 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64223 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64262 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64292 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64332 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64361 00:23:55.241 Removing: 
/var/run/dpdk/spdk_pid64397 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64431 00:23:55.241 Removing: /var/run/dpdk/spdk_pid64475 00:23:55.500 Removing: /var/run/dpdk/spdk_pid64508 00:23:55.500 Removing: /var/run/dpdk/spdk_pid64547 00:23:55.500 Removing: /var/run/dpdk/spdk_pid64578 00:23:55.500 Removing: /var/run/dpdk/spdk_pid64648 00:23:55.500 Removing: /var/run/dpdk/spdk_pid64759 00:23:55.500 Removing: /var/run/dpdk/spdk_pid65168 00:23:55.500 Removing: /var/run/dpdk/spdk_pid68528 00:23:55.500 Removing: /var/run/dpdk/spdk_pid68872 00:23:55.500 Removing: /var/run/dpdk/spdk_pid71269 00:23:55.500 Removing: /var/run/dpdk/spdk_pid71652 00:23:55.500 Removing: /var/run/dpdk/spdk_pid71914 00:23:55.500 Removing: /var/run/dpdk/spdk_pid71960 00:23:55.500 Removing: /var/run/dpdk/spdk_pid72580 00:23:55.500 Removing: /var/run/dpdk/spdk_pid73006 00:23:55.500 Removing: /var/run/dpdk/spdk_pid73056 00:23:55.500 Removing: /var/run/dpdk/spdk_pid73421 00:23:55.500 Removing: /var/run/dpdk/spdk_pid73941 00:23:55.500 Removing: /var/run/dpdk/spdk_pid74385 00:23:55.500 Removing: /var/run/dpdk/spdk_pid75349 00:23:55.500 Removing: /var/run/dpdk/spdk_pid76329 00:23:55.500 Removing: /var/run/dpdk/spdk_pid76447 00:23:55.500 Removing: /var/run/dpdk/spdk_pid76521 00:23:55.500 Removing: /var/run/dpdk/spdk_pid77975 00:23:55.500 Removing: /var/run/dpdk/spdk_pid78198 00:23:55.500 Removing: /var/run/dpdk/spdk_pid83384 00:23:55.500 Removing: /var/run/dpdk/spdk_pid83813 00:23:55.500 Removing: /var/run/dpdk/spdk_pid83923 00:23:55.500 Removing: /var/run/dpdk/spdk_pid84069 00:23:55.500 Removing: /var/run/dpdk/spdk_pid84115 00:23:55.500 Removing: /var/run/dpdk/spdk_pid84159 00:23:55.500 Removing: /var/run/dpdk/spdk_pid84206 00:23:55.500 Removing: /var/run/dpdk/spdk_pid84366 00:23:55.500 Removing: /var/run/dpdk/spdk_pid84519 00:23:55.500 Removing: /var/run/dpdk/spdk_pid84768 00:23:55.500 Removing: /var/run/dpdk/spdk_pid84885 00:23:55.500 Removing: /var/run/dpdk/spdk_pid85134 00:23:55.500 Removing: /var/run/dpdk/spdk_pid85254 00:23:55.500 Removing: /var/run/dpdk/spdk_pid85389 00:23:55.500 Removing: /var/run/dpdk/spdk_pid85733 00:23:55.500 Removing: /var/run/dpdk/spdk_pid86151 00:23:55.500 Removing: /var/run/dpdk/spdk_pid86458 00:23:55.500 Removing: /var/run/dpdk/spdk_pid86951 00:23:55.500 Removing: /var/run/dpdk/spdk_pid86959 00:23:55.500 Removing: /var/run/dpdk/spdk_pid87295 00:23:55.500 Removing: /var/run/dpdk/spdk_pid87315 00:23:55.500 Removing: /var/run/dpdk/spdk_pid87329 00:23:55.500 Removing: /var/run/dpdk/spdk_pid87360 00:23:55.500 Removing: /var/run/dpdk/spdk_pid87370 00:23:55.500 Removing: /var/run/dpdk/spdk_pid87725 00:23:55.500 Removing: /var/run/dpdk/spdk_pid87769 00:23:55.500 Removing: /var/run/dpdk/spdk_pid88107 00:23:55.500 Removing: /var/run/dpdk/spdk_pid88359 00:23:55.500 Removing: /var/run/dpdk/spdk_pid88848 00:23:55.500 Removing: /var/run/dpdk/spdk_pid89431 00:23:55.500 Removing: /var/run/dpdk/spdk_pid90777 00:23:55.500 Removing: /var/run/dpdk/spdk_pid91379 00:23:55.500 Removing: /var/run/dpdk/spdk_pid91381 00:23:55.500 Removing: /var/run/dpdk/spdk_pid93297 00:23:55.500 Removing: /var/run/dpdk/spdk_pid93389 00:23:55.500 Removing: /var/run/dpdk/spdk_pid93479 00:23:55.500 Removing: /var/run/dpdk/spdk_pid93569 00:23:55.500 Removing: /var/run/dpdk/spdk_pid93729 00:23:55.500 Removing: /var/run/dpdk/spdk_pid93819 00:23:55.500 Removing: /var/run/dpdk/spdk_pid93905 00:23:55.500 Removing: /var/run/dpdk/spdk_pid93995 00:23:55.500 Removing: /var/run/dpdk/spdk_pid94342 00:23:55.500 Removing: /var/run/dpdk/spdk_pid95033 
00:23:55.500 Removing: /var/run/dpdk/spdk_pid96394 00:23:55.500 Removing: /var/run/dpdk/spdk_pid96599 00:23:55.500 Removing: /var/run/dpdk/spdk_pid96895 00:23:55.500 Removing: /var/run/dpdk/spdk_pid97194 00:23:55.500 Removing: /var/run/dpdk/spdk_pid97744 00:23:55.500 Removing: /var/run/dpdk/spdk_pid97759 00:23:55.500 Removing: /var/run/dpdk/spdk_pid98113 00:23:55.500 Removing: /var/run/dpdk/spdk_pid98272 00:23:55.500 Removing: /var/run/dpdk/spdk_pid98429 00:23:55.500 Removing: /var/run/dpdk/spdk_pid98526 00:23:55.500 Removing: /var/run/dpdk/spdk_pid98681 00:23:55.500 Removing: /var/run/dpdk/spdk_pid98790 00:23:55.500 Removing: /var/run/dpdk/spdk_pid99469 00:23:55.500 Removing: /var/run/dpdk/spdk_pid99499 00:23:55.500 Removing: /var/run/dpdk/spdk_pid99534 00:23:55.500 Removing: /var/run/dpdk/spdk_pid99788 00:23:55.500 Removing: /var/run/dpdk/spdk_pid99822 00:23:55.500 Removing: /var/run/dpdk/spdk_pid99853 00:23:55.500 Clean 00:23:55.758 16:39:13 -- common/autotest_common.sh@1451 -- # return 0 00:23:55.758 16:39:13 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:23:55.758 16:39:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:55.758 16:39:13 -- common/autotest_common.sh@10 -- # set +x 00:23:55.758 16:39:13 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:23:55.758 16:39:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:55.758 16:39:13 -- common/autotest_common.sh@10 -- # set +x 00:23:55.758 16:39:13 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:55.758 16:39:13 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:55.758 16:39:13 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:55.758 16:39:13 -- spdk/autotest.sh@391 -- # hash lcov 00:23:55.758 16:39:13 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:55.758 16:39:13 -- spdk/autotest.sh@393 -- # hostname 00:23:55.758 16:39:13 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:56.017 geninfo: WARNING: invalid characters removed from testname! 
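With the functional tests done, coverage is assembled with the usual lcov flow: the post-test capture above (taken with --no-external and the fedora38 hostname as the test name) is merged with the baseline capture, and bundled or system paths are stripped before the report is built, mirroring the filtering commands that follow. A condensed sketch of that merge-and-filter step, with the long --rc flags omitted and $out standing in for the Jenkins output directory as an assumption:

    # Merge the baseline and post-test captures, then drop third-party and system code
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"   # exclude the bundled DPDK sources
    lcov -q -r "$out/cov_total.info" '/usr/*'   -o "$out/cov_total.info"   # exclude system headers and libraries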
00:24:17.976 16:39:35 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:20.501 16:39:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:23.029 16:39:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:25.013 16:39:43 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:27.544 16:39:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:29.441 16:39:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:31.968 16:39:49 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:31.968 16:39:49 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:31.968 16:39:49 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:31.968 16:39:49 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.968 16:39:49 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.968 16:39:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.968 16:39:49 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.968 16:39:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.968 16:39:49 -- paths/export.sh@5 -- $ export PATH 00:24:31.968 16:39:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.968 16:39:49 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:24:31.968 16:39:49 -- common/autobuild_common.sh@447 -- $ date +%s 00:24:31.968 16:39:49 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721579989.XXXXXX 00:24:31.968 16:39:49 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721579989.sliwfp 00:24:31.968 16:39:49 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:24:31.968 16:39:49 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:24:31.968 16:39:49 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:24:31.968 16:39:49 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:24:31.968 16:39:49 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:24:31.968 16:39:49 -- common/autobuild_common.sh@463 -- $ get_config_params 00:24:31.968 16:39:49 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:24:31.968 16:39:49 -- common/autotest_common.sh@10 -- $ set +x 00:24:31.968 16:39:49 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:24:31.968 16:39:49 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:24:31.968 16:39:49 -- pm/common@17 -- $ local monitor 00:24:31.968 16:39:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:31.968 16:39:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:31.968 16:39:49 -- pm/common@25 -- $ sleep 1 00:24:31.968 16:39:49 -- pm/common@21 -- $ date +%s 00:24:31.968 16:39:49 -- pm/common@21 -- $ date +%s 00:24:31.968 16:39:49 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721579989 00:24:31.969 16:39:49 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721579989 00:24:31.969 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721579989_collect-vmstat.pm.log 00:24:31.969 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721579989_collect-cpu-load.pm.log 00:24:32.901 16:39:50 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:24:32.901 16:39:50 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:24:32.901 16:39:50 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:24:32.901 16:39:50 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:24:32.901 16:39:50 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:24:32.901 16:39:50 -- spdk/autopackage.sh@19 -- $ timing_finish 00:24:32.901 16:39:50 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:32.901 16:39:50 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:32.901 16:39:50 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:32.901 16:39:50 -- spdk/autopackage.sh@20 -- $ exit 0 00:24:32.901 16:39:50 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:32.901 16:39:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:32.901 16:39:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:24:32.901 16:39:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:32.901 16:39:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:24:32.901 16:39:50 -- pm/common@44 -- $ pid=102687 00:24:32.901 16:39:50 -- pm/common@50 -- $ kill -TERM 102687 00:24:32.901 16:39:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:32.901 16:39:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:32.901 16:39:50 -- pm/common@44 -- $ pid=102689 00:24:32.901 16:39:50 -- pm/common@50 -- $ kill -TERM 102689 00:24:32.901 + [[ -n 5275 ]] 00:24:32.901 + sudo kill 5275 00:24:32.912 [Pipeline] } 00:24:32.932 [Pipeline] // timeout 00:24:32.939 [Pipeline] } 00:24:32.957 [Pipeline] // stage 00:24:32.962 [Pipeline] } 00:24:32.981 [Pipeline] // catchError 00:24:32.988 [Pipeline] stage 00:24:32.990 [Pipeline] { (Stop VM) 00:24:33.001 [Pipeline] sh 00:24:33.277 + vagrant halt 00:24:36.557 ==> default: Halting domain... 00:24:43.118 [Pipeline] sh 00:24:43.392 + vagrant destroy -f 00:24:46.666 ==> default: Removing domain... 
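Teardown also stops the background resource monitors that autopackage started: stop_monitor_resources reads each collector's pidfile and sends SIGTERM (pids 102687 and 102689 above), and the leftover process 5275 is killed with sudo before the VM is halted and destroyed. A small sketch of that pidfile pattern, assuming only the collect-cpu-load pidfile name seen in this log:

    # Stop a background monitor whose pid was recorded in a pidfile
    pidfile=/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid
    if [[ -e $pidfile ]]; then
        kill -TERM "$(<"$pidfile")"
    fi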
00:24:46.678 [Pipeline] sh 00:24:46.957 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:24:46.966 [Pipeline] } 00:24:46.984 [Pipeline] // stage 00:24:46.988 [Pipeline] } 00:24:47.008 [Pipeline] // dir 00:24:47.014 [Pipeline] } 00:24:47.032 [Pipeline] // wrap 00:24:47.039 [Pipeline] } 00:24:47.055 [Pipeline] // catchError 00:24:47.065 [Pipeline] stage 00:24:47.067 [Pipeline] { (Epilogue) 00:24:47.082 [Pipeline] sh 00:24:47.391 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:52.668 [Pipeline] catchError 00:24:52.670 [Pipeline] { 00:24:52.689 [Pipeline] sh 00:24:52.974 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:52.974 Artifacts sizes are good 00:24:52.983 [Pipeline] } 00:24:53.001 [Pipeline] // catchError 00:24:53.014 [Pipeline] archiveArtifacts 00:24:53.022 Archiving artifacts 00:24:53.205 [Pipeline] cleanWs 00:24:53.220 [WS-CLEANUP] Deleting project workspace... 00:24:53.220 [WS-CLEANUP] Deferred wipeout is used... 00:24:53.247 [WS-CLEANUP] done 00:24:53.249 [Pipeline] } 00:24:53.269 [Pipeline] // stage 00:24:53.276 [Pipeline] } 00:24:53.293 [Pipeline] // node 00:24:53.299 [Pipeline] End of Pipeline 00:24:53.334 Finished: SUCCESS